LinearSolve.jl
Handling sparse b with sparse A
Hi,
I have some issues when dealing with sparse matrices (from the SparseArrays package).
When I enter the following commands, I get an error:
julia> using SparseArrays
julia> using LinearSolve
julia> A = sprand(10, 10, 0.5)
julia> b = sprand(10, 0.5)
julia> sol = solve(LinearProblem(A, b))
ERROR: MethodError: no method matching ldiv!(::KLU.KLUFactorization{Float64, Int64}, ::SparseVector{Float64, Int64})
I've looked into the definition of ldiv! in the KLU source. The call ldiv!(::KLU.KLUFactorization{Float64, Int64}, ::SparseVector{Float64, Int64}) doesn't match the method ldiv!(klu::AbstractKLUFactorization{Tv}, B::StridedVecOrMat{Tv}) where {Tv <: Union{Float64, ComplexF64}}, since a SparseVector is not a StridedVecOrMat.
I also tried:
julia> sol = solve(LinearProblem(A, b), LUFactorization())
ERROR: MethodError: no method matching ldiv!(::SuiteSparse.UMFPACK.UmfpackLU{Float64, Int64}, ::SparseVector{Float64, Int64})
Still not working. Did I miss anything? Or should I solve sparse problems differently with LinearSolve (for example, by passing a custom solver as described in the documentation)?
I really need this package, because my use case is solving many large sparse matrices A (the elements are all of type ComplexF64, but every A has the same sparsity pattern) against a fixed b.
All of my package versions are listed below:
(@v1.8) pkg> status
[ef3ab10e] KLU v0.3.0
[ba0b0d4f] Krylov v0.8.3
[7ed4a6bd] LinearSolve v1.26.0
[37e2e46d] LinearAlgebra
[2f01184e] SparseArrays
@Wimmerer what happened to all of the ldiv! methods?
I just found out that KLU and UmfpackLU only allow b::Vector{T} where T == eltype(A); they don't allow b to be a SparseVector.
That is, the following code works:
julia> using SparseArrays, LinearSolve
julia> A = sprand(10, 10, 0.5)
julia> b = rand(10) # dense b; a SparseVector is not accepted
julia> sol = solve(LinearProblem(A, b))
Does anyone have an idea how to solve this problem without converting b from a SparseVector to a Vector? I am dealing with cases where the dimension of the matrix is quite large (>= 10,000,000).
Or should I just convert b to a dense Vector every time?
> Or should I just convert b to a dense Vector every time?
This is usually what is done. The solution will most likely be dense anyway.
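For reference, a minimal sketch of that workaround: the only change from the failing snippet above is `Vector(b)`, the standard SparseArrays densification, applied before building the problem (the `+ 10I` shift is just to keep this random A comfortably nonsingular).

```julia
using LinearAlgebra, SparseArrays, LinearSolve

A = sprand(10, 10, 0.5) + 10I   # sparse A, shifted diagonal for nonsingularity
b = sprand(10, 0.5)             # sparse right-hand side

# Densify b before building the problem; the solution is dense anyway.
sol = solve(LinearProblem(A, Vector(b)))
```

`sol.u` is then an ordinary dense `Vector{Float64}` satisfying `A * sol.u ≈ Vector(b)`.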
I don't believe KLU supports sparse RHS. Let me look if we have solvers that do @ChrisRackauckas
Oh, I didn't see the sparse-vector part. Yeah, the solution is dense anyway, so the conversion would be best. It might be good to just do that automatically?
I can do that; we do a copy anyway, so I'll try to make sure we don't copy twice.