SparseArrays.jl
`\` on Symmetric{Float32, SparseMatrixCSC{Float32, Int64}} fails
julia> a = sprand(Float64, 10, 10, 0.2);
julia> a = a'*a + I;
julia> Symmetric(a) \ rand(10);
julia> a = sprand(Float32, 10, 10, 0.2);
julia> a = a'*a + I;
julia> Symmetric(a) \ rand(Float32, 10);
ERROR: MethodError: no method matching lufact!(::SparseMatrixCSC{Float32,Int64}, ::Val{true})
Closest candidates are:
lufact!(::Union{DenseArray{T<:Union{Complex{Float32}, Complex{Float64}, Float32, Float64},2}, Base.ReinterpretArray{T<:Union{Complex{Float32}, Complex{Float64}, Float32, Float64},2,S,A} where S, Base.ReshapedArray{T<:Union{Complex{Float32}, Complex{Float64}, Float32, Float64},2,A,MI} where MI<:Tuple{Vararg{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64},N} where N} where A<:Union{SubArray{T,N,P,I,true} where I<:Tuple{Union{Base.Slice, UnitRange},Vararg{Any,N} where N} where P where N where T, DenseArray}, SubArray{T<:Union{Complex{Float32}, Complex{Float64}, Float32, Float64},2,A,I,L} where L} where I<:Tuple{Vararg{Union{Int64, AbstractRange{Int64}, Base.AbstractCartesianIndex},N} where N} where A<:Union{Base.ReshapedArray{T,N,A,MI} where MI<:Tuple{Vararg{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64},N} where N} where A<:Union{SubArray{T,N,P,I,true} where I<:Tuple{Union{Base.Slice, UnitRange},Vararg{Any,N} where N} where P where N where T, DenseArray} where N where T, DenseArray}, ::Union{Val{true}, Val{false}}) where T<:Union{Complex{Float32}, Complex{Float64}, Float32, Float64} at linalg\lu.jl:16
lufact!(::Union{Hermitian{T,S}, Symmetric{T,S}} where S where T, ::Union{Val{true}, Val{false}}) at linalg\lu.jl:23
lufact!(::Union{DenseArray{T,2}, Base.ReinterpretArray{T,2,S,A} where S, Base.ReshapedArray{T,2,A,MI} where MI<:Tuple{Vararg{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64},N} where N} where A<:Union{SubArray{T,N,P,I,true} where I<:Tuple{Union{Base.Slice, UnitRange},Vararg{Any,N} where N} where P where N where T, DenseArray}, SubArray{T,2,A,I,L} where L} where I<:Tuple{Vararg{Union{Int64, AbstractRange{Int64}, Base.AbstractCartesianIndex},N} where N} where A<:Union{Base.ReshapedArray{T,N,A,MI} where MI<:Tuple{Vararg{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64},N} where N} where A<:Union{SubArray{T,N,P,I,true} where I<:Tuple{Union{Base.Slice, UnitRange},Vararg{Any,N} where N} where P where N where T, DenseArray} where N where T, DenseArray} where T, ::Union{Val{true}, Val{false}}) at linalg\lu.jl:64
...
Stacktrace:
[1] lufact!(::Symmetric{Float32,SparseMatrixCSC{Float32,Int64}}, ::Val{true}) at .\linalg\lu.jl:24
[2] lufact at .\linalg\lu.jl:115 [inlined] (repeats 2 times)
[3] \(::Symmetric{Float32,SparseMatrixCSC{Float32,Int64}}, ::Array{Float32,1}) at .\linalg\generic.jl:882
[4] top-level scope
julia> versioninfo()
Julia Version 0.7.0-DEV.3234
Commit 2cc82d29e1* (2018-01-02 11:44 UTC)
Platform Info:
OS: Windows (x86_64-w64-mingw32)
CPU: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
LAPACK: libopenblas64_
LIBM: libopenlibm
LLVM: libLLVM-3.9.1 (ORCJIT, skylake)
This is a limitation in SuiteSparse. See https://github.com/PetterS/SuiteSparse/blob/27e5a8516464a6ac40bd3fa0e5b46e51b11f4765/CHOLMOD/Include/cholmod_internal.h#L175-L176. We'd have to either wait for CHOLMOD to be extended or implement our own sparse Cholesky solver, which would be a significant task.
Should this maybe print an error explaining that?
Just an error for the Float32 case should suffice. Not an upstream issue.
@andreasnoack, I'm confused by your comment. Why can't we compile both dlong and slong versions of cholmod, according to the comment you linked?
Oh, I see, the comment says only the first two are currently supported. So we'd have to patch CHOLMOD to make it work?
As @andreasnoack said, the single-precision case is not supported by CHOLMOD. So the best thing may be to error out.
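Erroring out could look something like this sketch. The `checked_solve` helper, its guard, and the error message are all hypothetical illustrations, not SparseArrays API:

```julia
using SparseArrays, LinearAlgebra

# Hypothetical guard (not part of SparseArrays): throw an informative
# ArgumentError for element types CHOLMOD does not support, instead of
# letting dispatch fail with an opaque MethodError.
function checked_solve(A::Symmetric{T,<:SparseMatrixCSC}, b::AbstractVector) where T
    if !(T <: Union{Float64, ComplexF64})
        throw(ArgumentError(
            "sparse factorization via CHOLMOD only supports Float64/ComplexF64 " *
            "element types; convert the matrix with Float64.(...) first"))
    end
    return A \ b  # supported types fall through to the usual solve
end
```

The point is only that the failure mode becomes a readable message at the call site rather than the long `MethodError` above.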
@Wimmerer Will #380 provide this?
@ViralBShah no, the changes just allow Int32 indices anywhere (and Int64). Nothing new about Float32, it's all doubles all the way down.
We can just do the conversions here? Not crazy expensive.
Yes, we should just do the conversion. Maybe we issue a warning?
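The conversion route amounts to the following user-level workaround (illustrative, not the package implementation): promote the Float32 problem to Float64, solve via CHOLMOD, and convert the result back.

```julia
using SparseArrays, LinearAlgebra

# Reproduce the failing setup from the report.
a = sprand(Float32, 10, 10, 0.2)
A = a' * a + I                      # SparseMatrixCSC{Float32, Int64}
b = rand(Float32, 10)

# Promote to Float64 (supported by CHOLMOD), solve, convert back.
x = Float32.(Symmetric(Float64.(A)) \ Float64.(b))
```

Doing this automatically inside the solver would cost one copy of the matrix and the right-hand side, plus a copy of the solution on the way back.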
A warning seems a bit much; it's at most a performance issue, right? I guess the conversion back to Float32 could be funky.
Maybe documentation is sufficient?
It is documented: https://github.com/JuliaSparse/SparseArrays.jl/blob/81457592fb8435153e2f5c87322dcf62afdf995e/src/solvers/cholmod.jl#L1185-L1189
Before I close this, let me check that we do the conversion automatically.
https://github.com/JuliaSparse/SparseArrays.jl/pull/487 will fix this. Closing as duplicate of #483.
This now works on Julia master.