This course provides a comprehensive introduction to algorithms for sparse matrices, focusing on efficient storage schemes, computational techniques, and applications in scientific computing and machine learning.
Course Description
Sparse matrices arise naturally in many applications, including the numerical solution of partial differential equations, network analysis, and machine learning. This course covers fundamental algorithms and data structures for efficiently storing and computing with sparse matrices.
The course explores:
- Sparse matrix representations and data structures
- Matrix-vector multiplication algorithms
- Direct methods for solving sparse linear systems
- Iterative methods for sparse linear systems
- Preconditioning techniques
- Eigenvalue problems for sparse matrices
- Parallel algorithms for sparse matrices
- Applications in scientific computing and machine learning
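As a taste of the first two bullets, here is a minimal sketch in plain Python of the CSR (compressed sparse row) format and sparse matrix-vector multiplication; the function names are illustrative, not part of any course-provided library:

```python
def dense_to_csr(A):
    """Convert a dense row-major matrix (list of lists) to CSR arrays.

    CSR stores only the nonzeros: `values` holds them row by row,
    `col_indices` holds each nonzero's column, and `row_ptr[i]` marks
    where row i's nonzeros start inside `values`.
    """
    values, col_indices, row_ptr = [], [], [0]
    for row in A:
        for j, a in enumerate(row):
            if a != 0:
                values.append(a)
                col_indices.append(j)
        row_ptr.append(len(values))
    return values, col_indices, row_ptr


def csr_matvec(values, col_indices, row_ptr, x):
    """Compute y = A @ x while touching only the stored nonzeros."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_indices[k]]
    return y
```

For example, with A = [[4, 0, 0], [0, 0, 5], [1, 2, 0]] and x = [1, 1, 1], `csr_matvec(*dense_to_csr(A), x)` returns [4.0, 5.0, 3.0]. The work is proportional to the number of nonzeros rather than to n², which is the central payoff of sparse storage.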
Prerequisites
- Linear Algebra
- Data Structures and Algorithms
- Programming proficiency in Python, C, or C++
- Basic numerical analysis
Topics Covered
- Sparse Matrix Formats: COO (coordinate), CSR (compressed sparse row), CSC (compressed sparse column), ELLPACK, HYB
- Graph Models: Adjacency graphs, quotient graphs, elimination trees
- Matrix Ordering: Minimum degree, approximate minimum degree (AMD), nested dissection
- Direct Methods: Cholesky factorization, LU decomposition for sparse matrices
- Iterative Methods: Conjugate gradient, GMRES, BiCGSTAB
- Preconditioning: Incomplete factorizations, multigrid methods
- Eigenvalue Solvers: Lanczos algorithm, Arnoldi iteration
- Applications: Finite element methods, network analysis, PageRank
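To illustrate the iterative-methods topic above, here is a minimal sketch of the conjugate gradient method in plain Python. It takes the matrix only through a `matvec` callback, which is exactly how sparse solvers avoid forming dense matrices; the function name and defaults are illustrative assumptions, not course-specified code:

```python
import math


def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive definite A, given only a
    function `matvec(v)` that returns A @ v.

    Starts from x = 0, so the initial residual is r = b.
    """
    n = len(b)
    x = [0.0] * n
    r = list(b)                # residual r = b - A x
    p = list(r)                # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs_old / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if math.sqrt(rs_new) < tol:
            break
        # Update search direction using the ratio of residual norms.
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x
```

In exact arithmetic CG converges in at most n iterations for an n-by-n system, but in practice it is stopped far earlier once the residual norm falls below a tolerance; preconditioning (another topic above) is what makes that early stopping effective on ill-conditioned problems.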
Textbooks
- Timothy A. Davis, “Direct Methods for Sparse Linear Systems”, SIAM, 2006.
- Yousef Saad, “Iterative Methods for Sparse Linear Systems”, SIAM, 2003.
- Alan George and Joseph W. H. Liu, “Computer Solution of Large Sparse Positive Definite Systems”, Prentice Hall, 1981.
Assessment
- Programming assignments (40%)
- Midterm exam (25%)
- Final project (35%)