
MarkovChain: state_values

oyamad opened this issue 7 years ago • 1 comment

  • Add a method to set state_values after construction
    Once mc is created by mc = MarkovChain(p), the state_values field has type UnitRange. Currently it cannot be changed afterwards to, say, an Array of state values. (This is relevant in particular for DPSolveResult.mc.)

  • Allow state_values to be an AbstractArray?
    When the state values are 2-dimensional, for example, the state space may be represented by a Matrix, but this is not accepted because state_values::TV is constrained by TV<:AbstractVector.
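The constraint described above can be illustrated with a minimal stand-in for the QuantEcon.jl type (field names and the default 1:n state labels follow the issue; the struct itself is a sketch, not the package's actual definition):

```julia
# Minimal sketch of an immutable MarkovChain with the TV<:AbstractVector
# constraint on state_values described in the issue.
struct MarkovChain{T,TM<:AbstractMatrix{T},TV<:AbstractVector}
    p::TM                 # transition matrix (n x n)
    state_values::TV      # one value per state
end

# Default constructor: state values fall back to the UnitRange 1:n.
MarkovChain(p::AbstractMatrix) = MarkovChain(p, 1:size(p, 1))

p = [0.9 0.1; 0.2 0.8]
mc = MarkovChain(p)
mc.state_values           # 1:2, a UnitRange

# A Matrix of 2-dimensional state values does not satisfy TV<:AbstractVector:
# MarkovChain(p, [0.0 1.0; 1.0 0.0])  # MethodError
```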

oyamad avatar Sep 18 '17 06:09 oyamad

In response to the first point, I would prefer to have the MarkovChain type be immutable. Instead of changing the state values later on, I would encourage users to simply construct a new MarkovChain instance using the original transition matrix and supplying new state values.
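Assuming QuantEcon.jl is installed, the suggested pattern might look like this (a sketch of the reconstruct-instead-of-mutate approach, using the package's positional MarkovChain(p, state_values) constructor):

```julia
using QuantEcon

p = [0.9 0.1; 0.2 0.8]
mc = MarkovChain(p)                   # state_values defaults to 1:2

# Rather than mutating mc.state_values, build a new chain from the
# original transition matrix with the desired state values.
mc2 = MarkovChain(mc.p, [-1.0, 1.0])  # same dynamics, new labels
```

Because the transition matrix is shared by reference, reconstruction is cheap; only a small wrapper object is created.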

I'd be open to allowing state_values to be an AbstractArray, with the number of elements in the first dimension equal to the number of rows and columns of the transition matrix. We'd have to think carefully (and probably prototype) to make sure that things like simulation remain seamless/natural once we can no longer assume AbstractVector. As an alternative to Matrix{T}, you could use Vector{NTuple{N,T}}, where N is the number of columns the Matrix would have. There are tradeoffs there too...
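The tuple-based alternative keeps state_values an AbstractVector. A sketch of converting a Matrix of 2-dimensional states (one row per state, an assumed layout) into a Vector{NTuple{2,T}}:

```julia
# Each row of the Matrix is one 2-D state.
states_matrix = [0.0 1.0;
                 1.0 0.0;
                 2.0 0.5]

# Convert rows to tuples: the result is a Vector{NTuple{2,Float64}},
# which satisfies the existing TV<:AbstractVector constraint.
states = [Tuple(row) for row in eachrow(states_matrix)]
```

One tradeoff: with tuples, extracting a single coordinate across all states requires a comprehension or getindex.(states, k), whereas a Matrix gives it as a column slice.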

sglyon avatar Oct 06 '17 03:10 sglyon