
Migrate PageRank and HITS implementations from `sprs` to `faer`

IvanIsCoding opened this issue on Jul 11, 2024 • 1 comment

What is the expected enhancement?

Update the code in https://github.com/Qiskit/rustworkx/blob/main/src/link_analysis.rs to use https://github.com/sarah-ek/faer-rs/.

The functions should still rely mostly on the power method to estimate the eigenvector, and they should still use sparse matrices. faer provides a function to multiply a sparse matrix by a dense vector that we should use: https://docs.rs/faer/latest/faer/sparse/linalg/matmul/fn.sparse_dense_matmul.html.
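To illustrate the approach, here is a minimal sketch of the power method for PageRank. It uses a hand-rolled CSR sparse-matrix-times-dense-vector product as a stand-in for faer's `sparse_dense_matmul` (the `Csr` type, `pagerank` function, and all parameters below are hypothetical simplifications, not rustworkx's actual implementation):

```rust
/// Toy column-stochastic transition matrix in CSR form (stand-in for a
/// faer/sprs sparse matrix type).
struct Csr {
    row_ptr: Vec<usize>,
    col_idx: Vec<usize>,
    values: Vec<f64>,
    n: usize,
}

impl Csr {
    /// y = A * x: sparse matrix times dense vector, the kernel that
    /// faer's sparse_dense_matmul would provide.
    fn matvec(&self, x: &[f64]) -> Vec<f64> {
        let mut y = vec![0.0; self.n];
        for row in 0..self.n {
            for k in self.row_ptr[row]..self.row_ptr[row + 1] {
                y[row] += self.values[k] * x[self.col_idx[k]];
            }
        }
        y
    }
}

/// Power iteration for PageRank with damping factor `alpha`: repeatedly
/// apply x <- alpha * A * x + (1 - alpha) / n until the L1 change is
/// below `tol`.
fn pagerank(a: &Csr, alpha: f64, tol: f64, max_iter: usize) -> Vec<f64> {
    let n = a.n;
    let mut x = vec![1.0 / n as f64; n];
    for _ in 0..max_iter {
        let ax = a.matvec(&x);
        let next: Vec<f64> = ax
            .iter()
            .map(|&v| alpha * v + (1.0 - alpha) / n as f64)
            .collect();
        let diff: f64 = next.iter().zip(&x).map(|(a, b)| (a - b).abs()).sum();
        x = next;
        if diff < tol {
            break;
        }
    }
    x
}

fn main() {
    // Tiny 3-node cycle 0 -> 1 -> 2 -> 0; entry A[row][col] is the
    // probability of moving from node `col` to node `row`.
    let a = Csr {
        row_ptr: vec![0, 1, 2, 3],
        col_idx: vec![2, 0, 1],
        values: vec![1.0, 1.0, 1.0],
        n: 3,
    };
    // A symmetric cycle converges to the uniform distribution (1/3 each).
    println!("{:?}", pagerank(&a, 0.85, 1e-9, 100));
}
```

The migration in question would replace only the `matvec` step with faer's routine; the surrounding iteration loop stays the same, which is why the benchmark focus is on the sparse-dense product.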

The motivation is mostly consistency with Qiskit, which already uses faer (https://github.com/Qiskit/qiskit/blob/1e8205e43d72cf2a186a8013072aa6b2c5cd1196/crates/accelerate/Cargo.toml#L23). We might need to benchmark the code to check that there are no performance regressions, though.

IvanIsCoding avatar Jul 11 '24 03:07 IvanIsCoding

FWIW, I don't think consistency with what qiskit is using is enough of a reason to do this. I originally suggested investigating faer to see if we could use it to accelerate the eigendecomposition instead of doing the power iteration method we're using now. I would only think we should do this if there is a performance improvement. Otherwise we can just stick with sprs.

The other place this could be useful is in the eigenvector and katz centrality implementations in: https://github.com/Qiskit/rustworkx/blob/main/rustworkx-core/src/centrality.rs

mtreinish avatar Jul 11 '24 21:07 mtreinish

I am going to close this; see https://github.com/Qiskit/qiskit/issues/13665.

Overall we are happy with sprs, despite its sparse-matrix multiplication of dense vectors not being parallelized.

IvanIsCoding avatar Apr 06 '25 05:04 IvanIsCoding