Topics offered by Wilfried Gansterer (wilfried.gansterer (at)) focus on various aspects of numerical algorithms (version of 24.10.2023). These topics usually require an interest in numerical algorithms and (large-scale) matrix computations, as well as in high-performance and parallel computing. You will find a list of currently open topics below, but you can also contact me and suggest your own project idea!

  1. Mixed-precision linear solvers for FPGAs
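
As background for this topic: the classic mixed-precision building block is iterative refinement -- factorize and solve in low precision, then refine with residuals computed in high precision. A minimal numpy sketch (float32 standing in for the FPGA's low-precision arithmetic; the test matrix and iteration count are illustrative choices, not part of the topic description):

```python
import numpy as np

def mixed_precision_solve(A, b, iters=10):
    """Solve Ax = b: low-precision (float32) solves, float64 residual refinement."""
    A32 = A.astype(np.float32)                      # low-precision copy of A
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                               # residual in high precision
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d                                      # apply the correction
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100)) + 100 * np.eye(100)  # well-conditioned test matrix
b = rng.standard_normal(100)
x = mixed_precision_solve(A, b)
```

On a well-conditioned system like this, the refined solution reaches full double-precision accuracy even though every factorization-level solve ran in single precision.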
  2. State-of-the-art checkpointing for achieving fault tolerance in large-scale computations
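
For orientation, a minimal sketch of plain application-level checkpoint/restart -- the baseline that state-of-the-art schemes (in-memory, asynchronous, multi-level) improve on. The file name, checkpoint interval, and simulated failure below are my own illustrative choices:

```python
import os, pickle, tempfile

CKPT = os.path.join(tempfile.gettempdir(), "demo_ckpt.pkl")  # hypothetical checkpoint file

def run(steps, fail_at=None):
    """Accumulate sum(0..steps-1), checkpointing every 10 steps."""
    if os.path.exists(CKPT):                 # resume from the last checkpoint
        with open(CKPT, "rb") as f:
            i, total = pickle.load(f)
    else:
        i, total = 0, 0
    while i < steps:
        if fail_at is not None and i == fail_at:
            raise RuntimeError("simulated crash")
        total += i
        i += 1
        if i % 10 == 0:                      # periodic checkpoint to disk
            with open(CKPT, "wb") as f:
                pickle.dump((i, total), f)
    return total

if os.path.exists(CKPT):
    os.remove(CKPT)                          # start from a clean state
try:
    run(100, fail_at=55)                     # crash mid-run ...
except RuntimeError:
    pass
result = run(100)                            # ... restart from the step-50 checkpoint
```

The restarted run redoes only the steps since the last checkpoint; real large-scale schemes focus on reducing the cost of writing the checkpoints themselves.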
  3. Fault tolerant iterative linear solvers
    1. Interpolation-based fault tolerance for the GMRES algorithm
    2. Exact state reconstruction for the GMRES algorithm
  4. Robustness and fault tolerance in training and inference of (deep) neural networks
  5. Gaussian process regression for moving sensors
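
As background: the core computation here is the Gaussian process posterior from noisy observations, which a moving-sensor variant would have to update as measurement locations change over time. A minimal sketch assuming an RBF kernel and 1-D inputs (kernel, length scale, and noise level are illustrative assumptions):

```python
import numpy as np

def rbf(X, Y, ell=0.5):
    """RBF (squared-exponential) kernel between 1-D point sets."""
    return np.exp(-0.5 * (X[:, None] - Y[None, :]) ** 2 / ell**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    """GP posterior mean and variance at X_test, via a Cholesky solve."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf(X_train, X_test)
    L = np.linalg.cholesky(K)                    # stable factorization of K
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = rbf(X_test, X_test).diagonal() - np.sum(v * v, axis=0)
    return mean, var

X = np.linspace(0, 2 * np.pi, 20)                # fixed sensor positions
y = np.sin(X)                                    # noisy-free measurements for the demo
mean, var = gp_posterior(X, y, np.array([np.pi / 2]))
```

The O(n^3) Cholesky factorization in this sketch is exactly what becomes interesting when sensors move: ideally the factorization is updated incrementally rather than recomputed from scratch.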
  6. (Spectral) divide-and-conquer algorithms for solving large-scale eigenvalue problems
  7. Efficient sparse tensor decomposition
  8. Communication Avoiding ILU(0) Preconditioner
  9. Mixed Precision Low Rank Approximations and their Application to Block Low Rank LU Factorization
  10. Randomized Low Rank Matrix Approximation: Rounding Error Analysis and a Mixed Precision Algorithm
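
For orientation, a minimal sketch of the underlying algorithm (a Halko/Martinsson/Tropp-style randomized range finder; the sizes and oversampling parameter are illustrative) -- the kernel whose rounding-error and mixed-precision behavior this topic would study:

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, seed=0):
    """Rank-k approximation A ~ U diag(s) Vt from a random sketch of A."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal basis for the sampled range
    B = Q.T @ A                               # small projected matrix
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U)[:, :k], s[:k], Vt[:k]      # truncate to rank k

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 150))  # exactly rank 8
U, s, Vt = randomized_low_rank(A, 8)
err = np.linalg.norm(A - U @ (s[:, None] * Vt)) / np.linalg.norm(A)
```

For an exactly rank-8 matrix the sketch captures the range up to roundoff, so the relative error is at the level of machine precision.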
  11. Replacing Pivoting in Distributed Gaussian Elimination with Random Techniques
  12. Quantization of rank-one matrices
  13. Solving systems with multiple right-hand sides
    Literature:
    "Block Conjugate Gradient algorithms for least squares problems"
    "A breakdown-free block conjugate gradient method"
    "Product Hybrid Block GMRES for Nonsymmetrical Linear Systems with Multiple Right-hand Sides"
    "A block minimum residual norm subspace solver with partial convergence management for sequences of linear systems"
    "Solving multiple linear systems with multiple RHS with GMRES"
  14. Block Gram-Schmidt methods
    Literature: "An overview of block Gram-Schmidt methods and their stability properties"
  15. Krylov methods for inverse problems
    Literature: "Krylov methods for inverse problems: Surveying classical, and introducing new, algorithmic approaches"
  16. Discrete Representation Learning for Variational Graph Auto-Encoder: Variational graph auto-encoder (VGAE) [1] is a framework for unsupervised learning on graph-structured data. This model employs latent variables and is capable of learning interpretable latent graph representations. The project's main aim is to propose a simple generative model that learns such discrete representations by applying Vector Quantised-Variational AutoEncoder (VQ-VAE) [2] to the VGAE.

    [1] T. Kipf and M. Welling, Variational Graph Auto-Encoders, published at the NIPS Workshop on Bayesian Deep Learning, 2016.
    [2] A. van den Oord, O. Vinyals, and K. Kavukcuoglu, Neural Discrete Representation Learning, published at NIPS 2017.
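
As background, a minimal numpy sketch of the vector-quantization step at the heart of VQ-VAE [2] (the straight-through gradient trick and learned codebook of the actual model are omitted; codebook size and data are illustrative): each latent vector is snapped to its nearest codebook entry, which is what would discretize the VGAE latent space:

```python
import numpy as np

def quantize(z, codebook):
    """Map each row of z to the index and value of its nearest codebook row."""
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # squared distances
    idx = d.argmin(axis=1)                                     # nearest codeword index
    return idx, codebook[idx]                                  # discrete code + embedding

rng = np.random.default_rng(0)
codebook = rng.standard_normal((16, 8))                    # 16 codewords of dimension 8
z = codebook[[3, 7]] + 0.01 * rng.standard_normal((2, 8))  # latents near codewords 3 and 7
idx, zq = quantize(z, codebook)
```

In the full model the encoder output `z` would come from the VGAE, and the codebook would be trained jointly; the lookup itself is exactly this nearest-neighbor search.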