Feat symmetric eigen
This adds prototypes for algorithms 6.1 & 6.2 in http://arxiv.org/abs/1903.08538; there are many, many optimizations to pursue.
For now, the speed of the spectral decompositions depends heavily on how quickly individual entries of the operator(s) can be indexed. But it's fun to play with Mathieu-like systems such as those below. This script shows that some perturbed negative Laplacian eigenproblems (posed on SinSpace(), for the trivial symmetry) can be decomposed in linear time.
```julia
using ApproxFun, LinearAlgebra

S = SinSpace()
D = Derivative(S)
p1 = Fun(CosSpace(), [1.0, 0.25, 0.0625])
p0 = Fun(CosSpace(), [0.0, 0.5, 1.0])
L = (-D)*(p1*D) + p0      # perturbed negative Laplacian

n = 500
λ1 = eigvals(Symmetric(L[1:2n, 1:2n]))[1:n];   # dense reference eigenvalues
λ2 = ApproxFun.symmetric_eigvals(L, n);
norm((λ1 - λ2) ./ λ1, Inf)                     # largest relative error
```
```julia
julia> L = -D^2 + p0;

julia> for ν in (100, 1000, 10_000, 100_000)
           @time ApproxFun.symmetric_eigen(L, ν);
       end
  0.001041 seconds (2.31 k allocations: 244.703 KiB)
  0.006339 seconds (28.63 k allocations: 1.832 MiB)
  0.049872 seconds (369.68 k allocations: 18.091 MiB, 5.63% gc time)
  0.492796 seconds (3.77 M allocations: 177.982 MiB, 22.45% gc time)
```
100,000 eigenvalues in half a second 🐢.
This looks great! But all the matrix stuff should be in its own package: ApproxFun is too big already and can't handle any more unit tests.
(We may need to divide it into ApproxFunBase, ApproxFunOrthogonalPolynomials, ApproxFunFourier, … to get over this.)
Yep, I agree.
They are all structured (symmetric) low-rank perturbations of symmetric banded matrices. Should they go in BandedMatrices.jl or a derived repository, like BandedPlusLowRankMatrices.jl?
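For concreteness, here is a toy sketch of the structure in question (the matrix and spike are my own construction, not the PR's types): a symmetric banded matrix plus a symmetric rank-1 perturbation.

```julia
using LinearAlgebra

# Toy sketch: a symmetric banded matrix B (bandwidth b) plus a symmetric
# rank-1 perturbation u*u' coupling the two endpoints.
n, b = 8, 2
B = [abs(i - j) <= b ? 1.0 / (1 + abs(i - j)) : 0.0 for i in 1:n, j in 1:n]
u = zeros(n); u[1] = u[n] = 1.0          # a rank-1 "spike"
A = B + u * u'

# The sum is still symmetric, so Symmetric(A) dispatches to symmetric
# eigensolvers, but A is no longer banded: the perturbation fills in the
# (1, n) corner.
issymmetric(A), A[1, n]
```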
https://github.com/kuanxu/AlmostBandedMatrices.jl
An AlmostBandedMatrix is a sum of a BandedMatrix and a SemiSeparableMatrix. I wonder if we can just do Symmetric{T,AlmostBandedMatrix{T}}?
Interesting, looks like that repository could be the right place, though we may have to discuss the interface. For example, what is an almost-banded matrix? If B is banded and J is the exchange matrix, is B + J almost-banded?
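To make that edge case concrete (with a small tridiagonal B of my choosing):

```julia
using LinearAlgebra

n = 6
B = Matrix(SymTridiagonal(fill(2.0, n), fill(-1.0, n - 1)))  # banded
J = [i + j == n + 1 ? 1.0 : 0.0 for i in 1:n, j in 1:n]      # exchange matrix
A = B + J

# A has entries at distance n-1 from the diagonal, and the perturbation J is
# a permutation matrix of full rank n, so A is banded plus *full*-rank, not
# banded plus low-rank; whether that should count as "almost banded" is
# exactly the interface question.
A[1, n], rank(J)
```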
Also, a bulge can be chased off the bands more efficiently than if it were literally typed as a low-rank matrix, even if it may be viewed as a low-rank perturbation. So maybe this calls for updating that repository with traits??
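A minimal sketch of one chase step (the toy matrix is mine, not from the PR), using a single Givens rotation from the stdlib:

```julia
using LinearAlgebra

# A symmetric tridiagonal matrix with a bulge at (3,1)/(1,3).
T = Matrix(SymTridiagonal(fill(4.0, 5), fill(1.0, 4)))
T[3, 1] = T[1, 3] = 0.5                  # the bulge

# One Givens rotation in the (2,3) plane annihilates the bulge...
G, _ = givens(T[2, 1], T[3, 1], 2, 3)
T2 = copy(T)
lmul!(G, T2)      # rows 2 and 3: zeros the (3,1) entry
rmul!(T2, G')     # columns 2 and 3: restores symmetry

# ...but pushes it one position down the band, to (4,2)/(2,4); repeating
# chases it off the bottom in O(n) rotations, never forming a dense
# low-rank update.
T2[3, 1], T2[4, 2]
```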
Why not a BlockSkylineMatrix?
I was thinking that block banded matrices would lead to fast spectral solutions of Schrödinger eigenproblems on the sphere.
Sorry, I should have said SkylineMatrix, not BlockSkylineMatrix. Though it's not clear what the "natural" storage for that is.
One storage scheme for skyline matrices uses one field of contiguous memory for the entries and another field of integers marking the index where each row/column of data begins: http://www.netlib.org/utk/people/JackDongarra/etemplates/node378.html. This is great because it is general enough to include banded matrices plus bulges or spikes (arrowheads). But what about inverting the Cholesky factor of a SymBlockArrowhead? We know this can be done in-place, but I don't think this is true of a general SkylineMatrix. Maybe only the action of the inverse Cholesky factors is necessary.
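A minimal sketch of that column-oriented storage scheme (the type and field names are mine, not an existing package): each column stores the contiguous run of entries from its topmost nonzero row down to the diagonal, and symmetry supplies the lower triangle.

```julia
# Sketch of symmetric skyline storage: contiguous data plus column pointers.
struct SkylineColumns
    data::Vector{Float64}   # column-by-column contiguous entries
    colptr::Vector{Int}     # index in `data` where column j begins (length n+1)
    firstrow::Vector{Int}   # topmost stored row of column j
end

function Base.getindex(S::SkylineColumns, i::Int, j::Int)
    i > j && return S[j, i]          # symmetric: read from the upper triangle
    i < S.firstrow[j] && return 0.0  # above the skyline
    S.data[S.colptr[j] + (i - S.firstrow[j])]
end

# A 3×3 symmetric example: column 2 stores only its diagonal (skyline height 1),
# column 3 stores rows 1:3, i.e. the matrix [2 0 5; 0 3 6; 5 6 7].
S = SkylineColumns([2.0, 3.0, 5.0, 6.0, 7.0], [1, 2, 3, 6], [1, 2, 1])
```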
A first stab at refining SymBandedPlusBulge might be to use a banded matrix of twice the bandwidth.
To be honest, it's probably best to use a banded matrix, since then you can call LAPACK.
Though I guess we can decompose a block skyline matrix into banded blocks and then use LAPACK.