`Matrix{Int} \ Vector{Float32}` is type-unstable
```julia
julia> eltype(rand(Int, 1, 1) \ rand(Float32, 1))
Float32

julia> eltype(rand(Int, 2, 2) \ rand(Float32, 2))
Float64
```
This is because `\` checks whether the matrix is triangular or diagonal (which it always is for n = 1), and that path preserves `Float32` while the general LU path promotes to `Float64`. I'm not sure how to fix this without copies.
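If I'm reading the dispatch right, the divergence can be reproduced by taking the two paths explicitly (`istriu` is the structural check):

```julia
julia> using LinearAlgebra

julia> istriu(rand(Int, 1, 1))   # a 1×1 matrix is always (upper) triangular
true

julia> eltype(UpperTriangular(rand(Int, 2, 2)) \ rand(Float32, 2))  # triangular path
Float32

julia> eltype(lu(rand(Int, 2, 2)) \ rand(Float32, 2))               # dense LU path
Float64
```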
Is this on master? I get `Float64` for your first test and `@code_warntype` is green.
Never mind, `@code_warntype` does yield `Union{Vector{Float32}, Vector{Float64}}`, sorry for the noise. I still get `Float64` on the first test though.
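For reference, the check in question (output trimmed to the inferred return type; the exact printout depends on the Julia version):

```julia
julia> @code_warntype rand(Int, 1, 1) \ rand(Float32, 1)
[...]
Body::Union{Vector{Float32}, Vector{Float64}}
```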
This raises the question: if it were type-stable, what type should it be?
- If `Float32`, then we have to decide whether to do the intermediate factorization with `Float64` and add a conversion to `Float32` (incurs a copy), or do everything with `Float32`, which could be bad for precision (is that considered breaking?).
- If `Float64`, then we have to decide whether `UpperTriangular(rand(Int, 2, 2)) \ rand(Float32, 2)` should return `Float32` as it currently does and add a conversion to `Float64` (incurs a copy), or directly return `Float64` (is that considered breaking?). Same for `LowerTriangular`. (Both options are sketched in code below.)
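Just to make the trade-offs concrete at the user level (a rough sketch, not a proposed implementation), roughly what each branch corresponds to:

```julia
using LinearAlgebra

A = rand(Int, 2, 2)
b = rand(Float32, 2)

# Option `Float32`, first branch: factorize in Float64 for accuracy, then
# convert back; the conversion is the extra copy mentioned above.
x = convert(Vector{Float32}, lu(A) \ b)

# Option `Float32`, second branch: do everything in Float32 (possible precision loss).
y = lu(Float32.(A)) \ b

# Option `Float64`: always widen, so the triangular case would need a
# conversion (copy) away from its current Float32 result.
z = Float64.(UpperTriangular(A) \ b)
```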
This last option does not necessarily incur a copy if an adequate type can be put here (and in all relevant occurrences): https://github.com/JuliaLang/julia/blob/09e7a05e81c92b19bd970a92e881dda759ad5fc4/stdlib/LinearAlgebra/src/triangular.jl#L1650-L1651 but that part of the code is sensitive, so I don't know how doable that would be.
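For illustration only (the helper names below are hypothetical, not stdlib code), this is the kind of eltype rule that would have to be chosen at that spot, with both candidates spelled out:

```julia
# Hypothetical helpers, not part of LinearAlgebra: the element type each option
# would assign to a triangular solve of an eltype-TA matrix against an
# eltype-Tb right-hand side.
keep_narrow(TA, Tb) = Base.promote_op((\), TA, Tb)                # the `Float32` option
match_dense(TA, Tb) = promote_type(Base.promote_op((\), TA, Tb),
                                   typeof(float(oneunit(TA))))    # the `Float64` option

keep_narrow(Int, Float32)   # Float32
match_dense(Int, Float32)   # Float64
```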
To me, when you have `Float32` it means you really want to use `Float32` and not `Float64`, so factoring `A` in `Float32` seems reasonable... Note also that `rand(Int, 2, 2) \ rand(BigFloat, 2)` currently factors `A` in `Float64`, so the precision argument is not really consistent.
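To make the factorization-precision point concrete, the LU factors that the dense `\` path computes can be inspected directly:

```julia
julia> using LinearAlgebra

julia> F = lu(rand(Int, 2, 2));  # the factorization the dense `\` path uses

julia> eltype(F.factors)         # eltype of the LU factors, not of the final result
Float64
```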
I don't have a strong opinion on what the type should be, but it really does seem like the types in these cases should be the same if at all possible.