Mohamed Tarek
@mfherbst When you say you get spurious zero eigenvalues, why are they spurious? If the residual norms `result.residual_norms` are all below the tolerance, i.e. `result.converged` is true, then maybe your...
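For reference, checking those fields looks roughly like this (a minimal sketch; the matrix and call here are illustrative, only `result.residual_norms` and `result.converged` come from the actual results object):

```julia
using IterativeSolvers, LinearAlgebra

# Illustrative problem: 2 smallest eigenpairs of a small SPD matrix.
A = Matrix(Diagonal(1.0:100.0))
result = lobpcg(A, false, 2; tol = 1e-7)

# If every residual norm is below the tolerance, `converged` is true and
# the reported eigenvalues meet the requested accuracy.
@show result.residual_norms
@show result.converged
```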
Julia uses LAPACK by default for the supported number types. Any funky number type, e.g. a high-precision one, will dispatch to a fallback implementation in pure Julia, though. Both seem scale-invariant.
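For example, the same `cholesky` call goes through LAPACK for `Float64` input but through the generic pure-Julia code for `BigFloat` (illustrative sketch):

```julia
using LinearAlgebra

A = [4.0 1.0; 1.0 3.0]     # Float64: cholesky dispatches to LAPACK (potrf)
cholesky(A)

Abig = BigFloat.(A)        # BigFloat: no LAPACK support, generic Julia fallback
cholesky(Abig)
```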
OK, I found the source of the "instability". The problem is that I am performing inner orthogonalization of `R` and `P` but not the full matrix `[X P R]` from which...
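In other words, the fix is to orthonormalize the whole block before the Rayleigh–Ritz step rather than `R` and `P` separately; a rough sketch of the idea (not the package's actual internals):

```julia
using LinearAlgebra

# Orthonormalize the combined basis so that [X P R] is well conditioned
# before the Rayleigh–Ritz step.
function orthonormalize_block(X, P, R)
    S = [X P R]
    return Matrix(qr(S).Q)   # thin Q: an orthonormal basis of span([X P R])
end
```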
Try this before calling `lobpcg`:

```julia
using LinearAlgebra

# Define `ldiv!` for the solver object so it can be applied in place:
# writes P(b) into c.
function LinearAlgebra.ldiv!(c::AbstractArray{T}, P::KMPLapSolver, b::AbstractArray{T}) where {T}
    c .= P(b)
    return c
end
```
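With that method defined, the solver object should (if I remember the keyword correctly) then be usable as the preconditioner:

```julia
# Hedged usage sketch: `P` is the KMPLapSolver instance from above, `A` the
# operator, `nev` the number of eigenpairs; `P` is passed via the
# preconditioner keyword of `lobpcg`.
result = lobpcg(A, false, nev; P = P)
```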
Note that it is possible to prove that `X[:,i]`, `P[:,i]` and `R[:,i]` will be linearly independent for each `i` in exact arithmetic unless exact convergence happens, i.e. `X[:,i]` doesn't change at...
I can provide a `PreconditionerFunction` in Preconditioners.jl to do this automatically.
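Something along these lines, say (just a sketch of what such a wrapper could look like; `PreconditionerFunction` doesn't exist in Preconditioners.jl yet):

```julia
using LinearAlgebra

# Hypothetical wrapper: turns any callable `f(b) -> x ≈ A \ b` into an
# object supporting the `ldiv!` interface that solvers expect.
struct PreconditionerFunction{F}
    f::F
end

function LinearAlgebra.ldiv!(c::AbstractArray, P::PreconditionerFunction, b::AbstractArray)
    c .= P.f(b)
    return c
end
```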
> @mohamed82008 how has the spurious output passed the tolerance check at the end? @lobpcg `X` is updated using `X = [X P R] * V`, where `V` is the...
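For context, that update is the standard Rayleigh–Ritz step on the subspace spanned by `[X P R]`; a generic sketch for the smallest eigenpairs (not the package's exact code):

```julia
using LinearAlgebra

# Project A onto span([X P R]), solve the small eigenproblem, and form the
# updated block X = [X P R] * V.
function rayleigh_ritz_update(A, X, P, R, nev)
    S = [X P R]                           # assumed orthonormalized beforehand
    F = eigen(Symmetric(S' * (A * S)))    # small projected eigenproblem
    V = F.vectors[:, 1:nev]               # coefficients of the wanted Ritz vectors
    return S * V                          # new X
end
```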
@lobpcg your Matlab code is LGPL-licensed. The code here is MIT-licensed, so for legal reasons I was careful not to adapt your Matlab version of the code. I...
Here is an example https://github.com/JuliaLinearAlgebra/IterativeSolvers.jl/blob/master/test/lobpcg.jl#L220.
> Comparing with the docs for `scipy.sparse.linalg.lobpcg` (here), it looks like `constraints` is expecting an `N x n_constraints` array (just a few concatenated column vectors), where the target operator is N...
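If I recall the interface here correctly, the constraints are passed the same way, as the columns of a single `N x n_constraints` matrix; a hedged sketch assuming `C` is the constraints keyword:

```julia
using IterativeSolvers, LinearAlgebra

n = 50
A = Matrix(Diagonal(1.0:n))
Y = Matrix{Float64}(I, n, 2)          # N x n_constraints constraint block
# The iterates are kept orthogonal to the columns of Y, so the two
# corresponding eigenpairs are deflated out.
result = lobpcg(A, false, 3; C = Y)
```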