IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 1, pp. 116–125
Sparse matrix–vector multiplication is an important kernel, but it is hard to execute efficiently even in the sequential case. Its problems (low arithmetic intensity, inefficient cache use, and limited memory bandwidth) are magnified as the core count of shared-memory parallel architectures increases. Existing techniques
are discussed in detail and categorised chiefly by their distribution types. Based on this categorisation, new parallelisation techniques are proposed. The theoretical scalability and memory usage of the various strategies are analysed, and experiments on multiple NUMA architectures confirm the validity of the analysis. One of the newly proposed methods attains the best average result across the experiments, reaching a parallel efficiency of 90 percent in one of them.
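To make the kernel concrete, the following is a minimal sketch of sequential sparse matrix–vector multiplication in the common Compressed Row Storage (CRS) format; the function name and array layout are illustrative assumptions, not taken from the paper. The inner loop's indirect access to x (via col_idx) is what causes the poor cache behaviour and low arithmetic intensity mentioned above; a row-wise parallelisation, one of the simplest distribution types, would distribute the iterations of the outer loop over threads (e.g. with an OpenMP pragma).

```c
#include <stddef.h>

/* Sequential SpMV, y = A*x, with A stored in CRS:
   - row_start has n+1 entries; row i's nonzeroes occupy
     positions row_start[i] .. row_start[i+1]-1,
   - col_idx[k] is the column of the k-th nonzero,
   - val[k] is its value.
   (Illustrative sketch; names are assumptions.) */
static void spmv_crs(size_t n, const size_t *row_start,
                     const size_t *col_idx, const double *val,
                     const double *x, double *y)
{
    /* A row-distributed parallel variant would add, e.g.,
       "#pragma omp parallel for" on this outer loop. */
    for (size_t i = 0; i < n; ++i) {
        double sum = 0.0;
        for (size_t k = row_start[i]; k < row_start[i + 1]; ++k)
            sum += val[k] * x[col_idx[k]];   /* indirect access to x */
        y[i] = sum;
    }
}
```

Row-wise distribution keeps each output element y[i] owned by one thread, so no write conflicts arise; the harder problems the paper addresses (cache reuse of x and NUMA-aware data placement) are not solved by this naive scheme.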
(Preprint published online on the IEEE website: http://www.computer.org/csdl/trans/td/preprint/06463397-abs.html)