MarkovChain: Respect sparsity
In the current implementation of MarkovChain, if the input is a sparse matrix, it is converted to a dense matrix in the simulation methods. The following is a possible approach to preserving the sparsity of the input matrix.
type SparseMarkovChain{T,Ti<:Integer,TV<:AbstractVector}
    P::SparseMatrixCSC{T,Ti}
    n::Int
    rowptr::Vector{Ti}
    colval::Vector{Ti}
    cdfs::Vector{T}
    state_values::TV
end
In the constructor of SparseMarkovChain, create rowptr, colval, and nzval from the input SparseMatrixCSC by using csr2csc from SparseMatrices.jl, and generate cdfs as in the Python version.
(@spencerlyon2 What is the status of SparseMatrices.jl?)
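As a rough sketch (not part of the proposal above), the constructor could also avoid csr2csc by using Base's sparse transpose, since the CSC fields of P' store P in CSR order; the function body, the default state_values, and the row-wise cumsum are illustrative assumptions:

function SparseMarkovChain{T,Ti}(P::SparseMatrixCSC{T,Ti},
                                 state_values::AbstractVector=1:size(P, 1))
    n = size(P, 1)
    Pt = P'  # CSC representation of P' == CSR representation of P
    rowptr, colval, nzval = Pt.colptr, Pt.rowval, Pt.nzval
    # Row-wise cumulative sums of the nonzero probabilities give the cdfs
    cdfs = similar(nzval)
    for i in 1:n
        r = rowptr[i]:rowptr[i+1]-1
        cdfs[r] = cumsum(nzval[r])
    end
    SparseMarkovChain(P, n, rowptr, colval, cdfs, state_values)
end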
Extend DiscreteRV as follows:
type DiscreteRV{T1<:AbstractVector,T2<:AbstractVector,TV<:AbstractVector}
    pdf::Nullable{T1}
    cdf::T2
    values::TV
end
draw(d::DiscreteRV) = d.values[searchsortedfirst(d.cdf, rand())]
In the simulate methods, use DiscreteRV as follows:
function simulate!{T,Ti}(mc::SparseMarkovChain{T,Ti}, X::Matrix{Ti})
    # One DiscreteRV per row, viewing the relevant slices of cdfs and colval
    P_dist = [
        DiscreteRV(
            Nullable(),
            sub(mc.cdfs, mc.rowptr[i]:mc.rowptr[i+1]-1),
            sub(mc.colval, mc.rowptr[i]:mc.rowptr[i+1]-1),
        ) for i in 1:mc.n
    ]
    ...
end
For computation of stationary distributions, we can implement an iterative method such as Successive Over-Relaxation (SOR), for which CSC is more appropriate.
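As a minimal illustration of the iterative idea (using plain power iteration rather than SOR, and assuming an irreducible, aperiodic chain; the tolerance and iteration cap are arbitrary choices):

# Repeatedly apply x <- P' * x, which only needs sparse matrix-vector products
function stationary_dist_iter(P::SparseMatrixCSC; tol=1e-12, maxiter=10000)
    n = size(P, 1)
    x = fill(1/n, n)
    Pt = P'  # a stationary distribution is a right eigenvector of P' with eigenvalue 1
    for it in 1:maxiter
        x_new = Pt * x
        norm(x_new - x, 1) < tol && return x_new
        x = x_new
    end
    return x
end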
@oyamad interesting proposal. I would really like to be able to preserve the sparsity of the chain when simulating.
I've done a bit more work with SparseMatrices.jl that I don't believe I have published. I would love to be able to flesh that project out a bit more, but I simply haven't had the time.
I think the proposed extension to DiscreteRV sounds good. I'd move the pdf::Nullable field to the end and fill it with a default value like this:
type DiscreteRV{T1<:AbstractVector,TV<:AbstractVector,T2<:AbstractVector}
    cdf::T1
    values::TV
    pdf::Nullable{T2}
end

function DiscreteRV{T1<:AbstractVector,TV<:AbstractVector}(
    cdf::T1, values::TV=1:length(cdf)
)
    DiscreteRV(cdf, values, Nullable{Vector{Float64}}())
end
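A hypothetical usage with these defaults (the example values are made up):

d = DiscreteRV([0.2, 0.5, 1.0])  # values default to 1:3, pdf to an empty Nullable
draw(d)                          # returns 1, 2, or 3 according to the cdf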