Large-Scale Sparse Matrix-Vector Multiplication

Dong Zhou (dongz@cs.cmu.edu)
Mu Li (muli@cs.cmu.edu)

Many machine learning tasks, such as PageRank, linear classification, and collaborative filtering, rely on gradient-descent-like numerical methods to reach an optimal solution. At their core, matrix-vector multiplication is the essential computation. In this project, we focus on optimizing single-machine sparse matrix-vector multiplication, in particular the case where the sparse matrix exceeds the capacity of main memory. We also explore cache-friendly, NUMA-aware optimizations.
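To make the core computation concrete, here is a minimal sketch of sparse matrix-vector multiplication over the common compressed sparse row (CSR) layout, written in plain Python. This is an illustrative example only, not the project's implementation; the function name and array layout (`indptr`, `indices`, `data`) follow the usual CSR convention.

```python
def csr_spmv(indptr, indices, data, x):
    """Compute y = A @ x, where A is stored in CSR form:
    indptr[i]..indptr[i+1] delimits row i's entries,
    indices[k] is the column of the k-th nonzero,
    data[k] is its value."""
    n_rows = len(indptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        # Accumulate the dot product of row i with x,
        # touching only the stored nonzeros.
        for k in range(indptr[i], indptr[i + 1]):
            acc += data[k] * x[indices[k]]
        y[i] = acc
    return y

# Example: the 3x3 matrix [[1,0,2],[0,3,0],[4,0,5]] times x = [1,1,1]
indptr = [0, 2, 3, 5]
indices = [0, 2, 1, 0, 2]
data = [1.0, 2.0, 3.0, 4.0, 5.0]
print(csr_spmv(indptr, indices, data, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

Because each row's nonzeros are contiguous in `data`, the matrix is streamed sequentially while `x` is accessed irregularly; this access pattern is what makes cache- and NUMA-aware placement of `x` and the row blocks matter at scale.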

Documents

  • Proposal (pdf)
  • Milestone Report (pdf)