eigvecs(A::Hermitian, eigvals) method?
Currently, we implement eigvecs(A, eigvals) only for real SymTridiagonal matrices, but it would be easy to extend this to Hermitian matrices via a Hessenberg factorization, as explained in this Discourse post:
using LinearAlgebra
import LinearAlgebra: eigvecs, RealHermSymComplexHerm
function eigvecs(A::RealHermSymComplexHerm, eigvals::AbstractVector{<:Real})
    F = hessenberg(A) # transform to SymTridiagonal form
    X = eigvecs(F.H, eigvals)
    return F.Q * X # transform eigvecs of F.H back to eigvecs of A
end
A quick benchmark with LinearAlgebra.BLAS.set_num_threads(1) shows that it is significantly faster than eigvecs(A) at computing a single eigenvector, and slightly faster than eigen(A, k:k) if you already have the eigenvalues:
julia> A = hermitianpart!(randn(1000,1000));
julia> λ = @btime eigvals($A);
69.372 ms (21 allocations: 7.99 MiB)
julia> Q = @btime eigvecs($A);
212.195 ms (25 allocations: 23.25 MiB)
julia> Q[:,17] ≈ @btime eigvecs($A, [λ[17]])
54.133 ms (73 allocations: 7.99 MiB)
true
julia> @btime eigen($A, 17:17); # computing 17th eigvec and eigval by eigen
58.106 ms (24 allocations: 15.62 MiB)
Since the implementation above already works, it should be an easy PR for someone to add it, along with a test and updated documentation.
Note, however, that if you are calling eigvals separately anyway, it's even faster to perform the Hessenberg factorization yourself and reuse it for both eigvals and eigvecs, since this factorization is what takes most of the time in eigen:
julia> F = @btime hessenberg($A);
54.186 ms (17 allocations: 7.90 MiB)
julia> λ = @btime eigvals($(F.H));
13.623 ms (9 allocations: 31.45 KiB)
julia> Q[:,17] ≈ @btime $(F.Q) * eigvecs($(F.H), [λ[17]])
379.409 μs (57 allocations: 95.38 KiB)
true
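If you want to wrap this reuse pattern in a function, a minimal sketch could look as follows (eigselect is a hypothetical name, not an existing or proposed API; it assumes a Hermitian input so that F.H is SymTridiagonal):
using LinearAlgebra

# Hypothetical helper: perform the expensive Hessenberg factorization once,
# then reuse it for both the eigenvalues and the requested eigenvectors.
function eigselect(A::Hermitian, which::UnitRange)
    F = hessenberg(A)                 # the O(n^3) step, done only once
    λ = eigvals(F.H)                  # cheap: F.H is SymTridiagonal, sorted ascending
    X = F.Q * eigvecs(F.H, λ[which])  # back-transform the tridiagonal eigvecs
    return λ[which], X
end

vals, vecs = eigselect(hermitianpart(randn(100, 100)), 1:3)  # three smallest eigenpairs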
For my application (the Discourse post mentioned above), the ideal API style would be
val, vec = eigenmin(A) # return the smallest eigenvalue and an associated eigenvector
which resembles the existing API
vals, vecs = eigen(A)
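As a minimal sketch of the desired semantics (eigenmin is a hypothetical name, not an existing LinearAlgebra function; this version assumes a Hermitian input and leans on eigen(A, 1:1), which computes only the smallest eigenpair):
using LinearAlgebra

# Hypothetical sketch: return the smallest eigenvalue of a Hermitian matrix
# and an associated eigenvector.
function eigenmin(A::Hermitian)
    E = eigen(A, 1:1)  # computes only the smallest eigenpair
    return E.values[1], E.vectors[:, 1]
end

val, vec = eigenmin(hermitianpart(randn(100, 100)))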
Things are a bit unexpected for me, though. Here is the current implementation: https://github.com/JuliaLang/LinearAlgebra.jl/blob/1ce842652c07b33289046236b09bc59bad43dbeb/src/eigen.jl#L431C1-L438C4
function eigmin(A::Union{Number, AbstractMatrix};
                permute::Bool=true, scale::Bool=true)
    v = eigvals(A, permute = permute, scale = scale)
    if eltype(v) <: Complex
        throw(DomainError(A, "`A` cannot have complex eigenvalues."))
    end
    minimum(v)
end
This appears to be LinearAlgebra.jl's implementation of the function.
Shouldn't there be a more specialized algorithm for eigmin that makes it more efficient?
Otherwise this API is superfluous: users can call minimum themselves.
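For Hermitian matrices, a cheaper specialization is already expressible with existing building blocks; for example, a one-line sketch (my_eigmin is a hypothetical name, not a proposed method):
using LinearAlgebra

# Sketch: compute only the smallest eigenvalue via the range-restricted
# solver, instead of the full spectrum that the generic eigmin computes.
my_eigmin(A::Hermitian) = eigvals(A, 1:1)[1]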
Because of the current lack of such an API, I could only do the following:
import LinearAlgebra
function trivial_my_eigmin(A::LinearAlgebra.Symmetric)
    vals, vecs = LinearAlgebra.eigen(A)
    valmin, colmin = findmin(vals)
    vecmin = vecs[:, colmin]
    return valmin, vecmin
end
# test code begins
A = LinearAlgebra.SymTridiagonal([1.; 2.; 1.], [2.; 3.])
A = LinearAlgebra.Symmetric(Matrix(A))
valmin, vecmin = trivial_my_eigmin(A) # 🔴 an efficient substitute API is desired here
# Since A is not PSD, the smallest eigenvalue is negative; the identity below
# holds exactly in theory and up to floating-point tolerance in practice
@assert transpose(vecmin) * A * vecmin ≈ valmin < 0 "failure may be due to floating-point tolerance"
If any substitute is introduced, please let me know (I should be notified of replies here). Thank you.
For the minimum eigenvalue and eigenvector of a Hermitian matrix, just use eigen(A, 1:1).
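For example, with the matrix from the test code above:
using LinearAlgebra

A = Symmetric(Matrix(SymTridiagonal([1.; 2.; 1.], [2.; 3.])))
E = eigen(A, 1:1)           # computes only the smallest eigenpair
valmin = E.values[1]
vecmin = E.vectors[:, 1]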
> Things are a bit unexpected for me.
That method is for general matrices, where eigmin doesn't have a good algorithm. A specialized algorithm is used for Hermitian matrices, and also for eigen(A, 1:1) (which only works for Hermitian matrices).
Considering what you suggest, it seems that eigmin is indeed superfluous? We can just use eigen(A, 1:1), or use eigen(A) together with minimum as a fallback, which is no worse than eigmin. (Right?)