SciMLOperators.jl
Lazy concatenation / block diagonal forms of operators
Does there exist functionality analogous to what is listed in https://julialinearalgebra.github.io/LinearMaps.jl/stable/types/#BlockMap-and-BlockDiagonalMap?
In particular, lazy hcat/vcat/hvcat/diagonal op formation, etc.
We should also revisit this line of code: https://github.com/SciML/SciMLOperators.jl/blob/31275d4e2b2f3671fb4c2811656dd555c4e1f24a/src/matrix.jl#L116
No, we don't have that functionality yet. It's on my to-do list. It would make writing vector-calculus operators really convenient, and would be very useful for solving coupled PDE systems.
Potential API:
abstract type AbstractBlockOperator{T} <: AbstractSciMLOperator{T} end

###
# Block Operator
###

struct BlockOperator{T, Ti, Tb} <: AbstractBlockOperator{T}
    Mblocks::Ti
    Nblocks::Ti
    blocks::Tb
    size_info # placeholder, e.g. cached block row/column sizes

    function BlockOperator(Mblocks::Integer, Nblocks::Integer, blocks)
        @assert Mblocks * Nblocks == length(blocks)
        size_consistent = true
        for i in 1:Mblocks, j in 1:Nblocks
            # loop over rows, cols, to ensure size consistency
        end
        @assert size_consistent
        T = promote_type(eltype.(blocks)...)
        new{T, typeof(Mblocks), typeof(blocks)}(Mblocks, Nblocks, blocks, nothing) # size_info placeholder
    end
end

function BlockOperator(A::AbstractVecOrMat{<:AbstractSciMLOperator})
    BlockOperator(size(A, 1), size(A, 2), vec(A))
end

function Base.:*(L::BlockOperator, u::AbstractVecOrMat)
    # split u by block-column sizes, apply each block, and vcat the block-row sums
end

L = BlockOperator([A1, A2])
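As a purely illustrative sketch of the intended semantics (assuming blocks are stored column-major, matching vec(A) above; blockmul is a hypothetical helper name, not part of the package):

function blockmul(L::BlockOperator, u::AbstractVector)
    # column sizes, read off the first block row
    ncols   = [size(L.blocks[(j - 1) * L.Mblocks + 1], 2) for j in 1:L.Nblocks]
    offsets = cumsum([0; ncols])
    us = [u[(offsets[j] + 1):offsets[j + 1]] for j in 1:L.Nblocks]   # split u by block columns
    rows = [sum(L.blocks[(j - 1) * L.Mblocks + i] * us[j] for j in 1:L.Nblocks)
            for i in 1:L.Mblocks]
    vcat(rows...)   # stack the block-row results
end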
###
# Block Diagonal
###
struct BlockDiagonal{T, Tb} <: AbstractBlockOperator{T}
    blocks::Tb

    function BlockDiagonal(blocks::AbstractSciMLOperator...)
        T = promote_type(eltype.(blocks)...)
        new{T, typeof(blocks)}(blocks)
    end
end

function Base.:*(L::BlockDiagonal, u::AbstractVecOrMat)
    # each block acts on its own slice of u
    offsets = cumsum([0; [size(B, 2) for B in L.blocks]])
    vcat((B * selectdim(u, 1, (offsets[k] + 1):offsets[k + 1]) for (k, B) in enumerate(L.blocks))...)
end
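For example, with hypothetical operators A1, A2 and vectors u1, u2 of matching sizes:

D = BlockDiagonal(A1, A2)
D * vcat(u1, u2) ≈ vcat(A1 * u1, A2 * u2)  # block-diagonal action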
If we support sparse block patterns, then block diagonal becomes a special case.
Yeah, for sparse patterns we can have a constructor that accepts pairs (L, (m, n)). And a special constructor for BlockDiagonal.
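A rough sketch of what that pair-based constructor might look like (SparseBlockOperator and its field names are hypothetical, just to make the idea concrete):

# store only the nonzero blocks together with their (row, col) block positions
struct SparseBlockOperator{T, Tb, Ti} <: AbstractBlockOperator{T}
    blocks::Tb      # operators
    positions::Ti   # matching (m, n) block indices
end

function SparseBlockOperator(pairs::Pair...)
    blocks    = first.(pairs)
    positions = last.(pairs)
    T = promote_type(eltype.(blocks)...)
    SparseBlockOperator{T, typeof(blocks), typeof(positions)}(blocks, positions)
end

# e.g. SparseBlockOperator(Dx => (1, 1), Dy => (2, 2)) would be block diagonal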
It might make sense to implement BlockDiagonal as its own type so we can define has_ldiv, has_ldiv! on it when each block is invertible.
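For instance, a minimal sketch of those trait definitions, assuming the BlockDiagonal type sketched above:

SciMLOperators.has_ldiv(L::BlockDiagonal)  = all(SciMLOperators.has_ldiv,  L.blocks)
SciMLOperators.has_ldiv!(L::BlockDiagonal) = all(SciMLOperators.has_ldiv!, L.blocks)
# \ and ldiv! can then be forwarded block-by-block against the matching slices of u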
Can always do const BlockDiagonal = BlockOperator{AA} where {AA<:Diagonal}, with AA imagined to be the block pattern.
Also want to be mindful of code duplication with Kronecker products. $I \otimes A$ is a block diagonal matrix, and $A \otimes B = (A \otimes I)(I \otimes B)$, although that might not be the best perspective for efficiency.
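A quick dense-matrix check of those identities with kron from LinearAlgebra (arbitrary sizes, purely for illustration):

using LinearAlgebra
A, B = rand(3, 3), rand(4, 4)
Id3, Id4 = Matrix(I, 3, 3), Matrix(I, 4, 4)
kron(Id4, A)                               # block diagonal: 4 copies of A on the diagonal
kron(A, B) ≈ kron(A, Id4) * kron(Id3, B)   # true (mixed-product property)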
I remember putting short-circuits in TensorProductOperator for when the outer operator is IdentityOperator. But we can always just write an overload if our lazy block version is faster.
BlockOperators also need to support lazy matrix algebra, so we can multiply blocks and compose their components. Example:
G = BlockOperator([Dx, Dy])
T = BlockDiagonal(Tx, Ty)
Lapl = G' * T * G # assume sizes match
Lapl == Dx' * Tx * Dx + Dy' * Ty * Dy # true
So we need to overload Base.:*(A::AbstractBlockOperator, B::AbstractBlockOperator).
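Something like the following could work, assuming the column-major BlockOperator layout sketched above (block is a hypothetical accessor; the (i, j) block of the product is the sum over k of block(A, i, k) * block(B, k, j)):

# hypothetical accessor into the column-major block storage
block(A::BlockOperator, i, j) = A.blocks[(j - 1) * A.Mblocks + i]

function Base.:*(A::AbstractBlockOperator, B::AbstractBlockOperator)
    @assert A.Nblocks == B.Mblocks   # inner block dimensions must match
    blocks = [sum(block(A, i, k) * block(B, k, j) for k in 1:A.Nblocks)
              for i in 1:A.Mblocks, j in 1:B.Nblocks]
    BlockOperator(A.Mblocks, B.Nblocks, vec(blocks))   # vec is column-major, matching the storage
end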
linking old block matrix issue here for completeness. https://github.com/SciML/SciMLOperators.jl/issues/45
some inspiration: https://github.com/JuliaLinearAlgebra/LinearMaps.jl/blob/master/src/blockmap.jl