MIT OpenCourseWare

Projects

The project counts as part of the Homework grade.
Banded Matrices Project
Due: By week #11

Most matrices that arise in practice, e.g., from discretization of partial differential equations by finite differences or finite elements, are sparse. A careful treatment of sparse matrices results in significant savings of computational effort and memory allocation. We will try to navigate together through a simplified version of this process.

QR factorization
For full matrices, the cost of the QR factorization using Givens rotations is about twice that of using Householder reflections. Consider a tridiagonal matrix. Write one program that uses Givens rotations to compute the QR decomposition of the matrix, and another that uses Householder reflections. Do not form the matrix Q explicitly; rather, store the angles and vectors corresponding to each rotation and reflection, respectively. See page 123 of Demmel's book for more implementation details of the Givens rotation. For a general banded matrix, discuss the sparsity pattern of the Q and R factors of the QR decomposition, in both the symmetric and non-symmetric cases, starting with the tridiagonal examples above.
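As a starting point, the Givens-rotation half of the task might be sketched as follows (in Python/NumPy; the course does not prescribe a language, and the function names here are our own). Note that Q is never formed: only the (c, s) pair of each rotation is stored, and a separate routine applies Q^T to a vector from those pairs.

```python
import numpy as np

def givens_qr_tridiag(A):
    """QR of a tridiagonal matrix via Givens rotations.

    Q is not formed; instead the cosine/sine pair of each rotation is
    stored.  R overwrites a copy of A and has only three nonzero
    diagonals (the main diagonal and the first two superdiagonals).
    """
    R = A.astype(float).copy()
    n = R.shape[0]
    rotations = []                      # (c, s) for each of the n-1 rotations
    for k in range(n - 1):
        a, b = R[k, k], R[k + 1, k]
        r = np.hypot(a, b)
        c, s = a / r, -b / r            # chosen so s*a + c*b = 0
        rotations.append((c, s))
        # The rotation only touches rows k and k+1, and (because A is
        # tridiagonal) only columns k .. min(k+2, n-1).
        hi = min(k + 3, n)
        rows = R[k:k + 2, k:hi].copy()
        R[k, k:hi]     = c * rows[0] - s * rows[1]
        R[k + 1, k:hi] = s * rows[0] + c * rows[1]
    return rotations, R

def apply_qt(rotations, x):
    """Compute Q^T x from the stored rotations (e.g. for least squares)."""
    y = x.astype(float).copy()
    for k, (c, s) in enumerate(rotations):
        y[k], y[k + 1] = c * y[k] - s * y[k + 1], s * y[k] + c * y[k + 1]
    return y
```

Since each rotation costs O(1) work on the three affected columns, the whole factorization is O(n), compared with O(n^3) for a dense QR.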

LU factorization
Implement the Cholesky factorization (without pivoting) for banded symmetric positive definite matrices of size n and bandwidth p. Do not store the entries of the Cholesky factor over the original matrix.
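A minimal band-aware sketch, keeping the factor L in separate storage as required (function name and the use of a full n-by-n array for L are our own choices; a real implementation would use compact band storage):

```python
import numpy as np

def banded_cholesky(A, p):
    """Cholesky factor L (A = L L^T) of a symmetric positive definite
    matrix of bandwidth p, stored separately from A.

    Each inner product is restricted to the band, so the cost is
    O(n p^2) instead of the dense O(n^3); L inherits bandwidth p.
    """
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        lo = max(0, j - p)                 # L[j, k] = 0 for k < j - p
        s = A[j, j] - np.dot(L[j, lo:j], L[j, lo:j])
        if s <= 0:
            raise ValueError("matrix is not positive definite")
        L[j, j] = np.sqrt(s)
        for i in range(j + 1, min(j + p + 1, n)):
            lo_i = max(lo, i - p)          # both rows vanish left of here
            L[i, j] = (A[i, j] - np.dot(L[i, lo_i:j], L[j, lo_i:j])) / L[j, j]
    return L
```

Because no pivoting is done, the factor stays within the original band, which is exactly what makes the O(n p^2) bound possible.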

Hessenberg Form
Write a program to compute the Hessenberg form of a banded matrix. Make sure you take advantage of the sparsity of the matrix. What can you say about the sparsity pattern of Q? Experiment with symmetric banded matrices as well.
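A dense Householder-based reduction, useful as a reference to test a band-aware version against, might look like the sketch below (our own code, not the assigned solution; it does not yet exploit the band, which is the point of the exercise). As in the QR part, Q is kept implicitly as the list of Householder vectors.

```python
import numpy as np

def hessenberg(A):
    """Reduce A to upper Hessenberg form H = Q^T A Q via Householder
    reflections.  Dense reference version: a banded implementation
    would restrict each reflection to the band plus its fill-in.
    """
    H = A.astype(float).copy()
    n = H.shape[0]
    vs = []                              # Householder vectors, not Q itself
    for k in range(n - 2):
        x = H[k + 1:, k].copy()
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])  # avoid cancellation
        nv = np.linalg.norm(v)
        if nv == 0:                      # column already in the right shape
            vs.append(v)
            continue
        v /= nv
        vs.append(v)
        # Similarity transform: P = I - 2 v v^T applied from both sides.
        H[k + 1:, k:] -= 2.0 * np.outer(v, v @ H[k + 1:, k:])
        H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ v, v)
    return vs, H
```

For a symmetric input the Hessenberg form is tridiagonal, which is a good sanity check when experimenting.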

Final Comments: Assuming the bandwidth p is much smaller than the matrix size n, estimate the computational cost of each of the programs for non-symmetric and symmetric matrices (where applicable). Give numerical evidence supporting these estimates.
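One way to gather such evidence, besides wall-clock timing, is to count operations directly. The hypothetical counter below (our own illustration, mirroring the loop structure of a band-aware Cholesky) lets you check the O(n p^2) estimate: doubling n should roughly double the count, while the dependence on p grows quadratically.

```python
def cholesky_flops(n, p):
    """Count the multiplications/divisions performed by a band-aware
    Cholesky on an n x n SPD matrix of bandwidth p.  Illustrative
    counter only -- it mirrors the loop bounds, not actual arithmetic.
    """
    mults = 0
    for j in range(n):
        lo = max(0, j - p)
        mults += j - lo                      # dot product for the diagonal
        for i in range(j + 1, min(j + p + 1, n)):
            lo_i = max(lo, i - p)
            mults += (j - lo_i) + 1          # dot product plus the division
    return mults

# Expect the count to scale like n * (p + p*(p+1)/2), i.e. O(n p^2).
print(cholesky_flops(1000, 3), cholesky_flops(2000, 3))
```

The same counting approach applies to the QR and Hessenberg programs; a log-log plot of count (or runtime) against n and p makes the predicted slopes easy to read off.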