DynamicalSystemsBase.jl
Discrete systems with memory (e.g. AR-N systems)
Describe the feature you'd like to have
See discussion here. It would be nice to have a dedicated implementation of `DiscreteDynamicalSystemWithMemory`, which could iterate systems where the next state depends on the `n` previous time steps. This would allow defining, for example, autoregressive systems.
In CausalityTools.jl, I circumvent the lack of this type by defining each system as a struct that allocates memory containers, which are updated as the system is iterated. For systems with stochastic components, these structs also store a reference to a random number generator, so that trajectories are reproducible.
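To make the idea concrete, here is a minimal sketch of an AR(2) process written as a plain first-order map by carrying the previous value along in the state vector; this is the bookkeeping a dedicated type could hide from the user. The names `AR2Def` and `ar2_rule` are made up for illustration, and the constructor call is the same `DiscreteDynamicalSystem(f, u0, p)` form used in the examples below.

```julia
# Hypothetical sketch: AR(2), x_t = φ₁ x_{t-1} + φ₂ x_{t-2} + σ ε_t, as a first-order map.
# `AR2Def` and `ar2_rule` are illustrative names, not part of any package.
using DynamicalSystemsBase, StaticArrays, Random

Base.@kwdef struct AR2Def{R}
    φ₁::Float64 = 0.5
    φ₂::Float64 = -0.2
    σ::Float64 = 0.1
    rng::R = Random.default_rng()
end

function ar2_rule(u, p::AR2Def, t)
    x, xprev = u                                      # u = (x_{t-1}, x_{t-2})
    xnew = p.φ₁ * x + p.φ₂ * xprev + p.σ * randn(p.rng)
    return SVector(xnew, x)                           # shift the memory by one step
end

ds = DiscreteDynamicalSystem(ar2_rule, SVector(0.0, 0.0), AR2Def())
```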
If possible, sketch out an implementation strategy
Implement the approach described above, somehow. I'll revisit this when I'm done with the CausalityTools v2.0 release and after application deadlines.
Isn't this an N-state Markov chain? Or is that the most general way to define these systems? That's probably what we should implement.
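For reference, the standard reduction behind this observation: an order-$n$ map

$$
x_t = f(x_{t-1}, x_{t-2}, \dots, x_{t-n})
$$

is a first-order map $X_t = F(X_{t-1})$ on the augmented state $X_{t-1} = (x_{t-1}, \dots, x_{t-n})$, with

$$
F(x_{t-1}, \dots, x_{t-n}) = \bigl(f(x_{t-1}, \dots, x_{t-n}),\, x_{t-1}, \dots, x_{t-n+1}\bigr),
$$

so with stochastic components it is indeed a Markov chain on the enlarged state space.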
Here's an example of how I do it in CausalityTools.jl. There are 3 state variables, each depending on states at most 2 time steps back. Hence, `past_states` is an `SVector{3, MVector{2}}`. An instance of `Henon3` can then be passed as a parameter container to `DiscreteDynamicalSystem`, and `past_states` is updated using the `update_states!` function.
struct Henon3{P, T, S, A, B, C} <: LaggedDiscreteDefinition{P}
    past_states::P
    xi::S
    a::A
    b::B
    c::C

    function Henon3(; a::A = 1.4, b::B = 0.3, c::C = 0.1,
            xi::S = [0.4, 0.5, 0.6]) where {A, B, C, S}
        T = eltype(1.0)
        m₁ = MVector{2, T}(repeat([xi[1]], 2))
        m₂ = MVector{2, T}(repeat([xi[2]], 2))
        m₃ = MVector{2, T}(repeat([xi[3]], 2))
        past_states = SVector{3, MVector{2, T}}(m₁, m₂, m₃)
        P = typeof(past_states)
        return new{P, T, S, A, B, C}(past_states, xi, a, b, c)
    end
end
function system(definition::Henon3)
    return DiscreteDynamicalSystem(eom_henon3, definition.xi, definition)
end
function eom_henon3(u, p::Henon3, t)
    # `u` is simply ignored here, because the state is stored in the memory vectors
    m₁, m₂, m₃ = p.past_states
    x₁₁, x₁₂ = m₁[1], m₁[2]
    x₂₁, x₂₂ = m₂[1], m₂[2]
    x₃₁, x₃₂ = m₃[1], m₃[2]
    a, b, c = p.a, p.b, p.c
    dx₁ = a - x₁₁^2 + b*x₁₂
    dx₂ = a - c*x₁₁*x₂₁ - (1 - c)*x₂₁^2 + b*x₂₂
    dx₃ = a - c*x₂₁*x₃₁ - (1 - c)*x₃₁^2 + b*x₃₂
    new_state = SVector{3}(dx₁, dx₂, dx₃)
    update_states!(p, new_state) # Update memory vectors
    return new_state
end
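`update_states!` is not shown above; roughly, it shifts each memory vector one step back and stores the newest value at the front. A sketch of what it could look like (a guess for illustration, not the actual CausalityTools.jl implementation):

```julia
# Guessed sketch of `update_states!` (not the actual CausalityTools.jl code):
# shift each memory vector one step back and store the newest value first.
function update_states!(p::LaggedDiscreteDefinition, new_state::SVector{D}) where {D}
    for i in 1:D
        m = p.past_states[i]            # MVector of past values for variable i
        for k in length(m):-1:2
            m[k] = m[k-1]               # push older values one step back
        end
        m[1] = new_state[i]             # newest value goes in front
    end
    return p
end
```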
> Isn't this an N-state Markov chain? Or is that the most general way to define these systems? That's probably what we should implement.
Off the top of my head, I see three cases that need to be covered:

1. Deterministic lagged discrete dynamical systems, like the `Henon3` example above.
2. The same as (1), but with stochastic components.
3. Purely stochastic systems, which would be Markov chains (a rough sketch of this case follows below).

Perhaps there is a way to unify all of these, but I suspect that premature generalization is much more time consuming than just covering each case separately with its own concrete subtype of `DynamicalSystem`.
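To make case (3) concrete, here is a rough sketch of a finite-state Markov chain written with the same `f(u, p, t)` calling convention as the other examples in this thread. `MarkovChainDef` and `markov_step` are made-up names, and whether integer-valued states should go through `DiscreteDynamicalSystem` at all is exactly the kind of design question raised above.

```julia
# Hypothetical sketch of case (3): a finite-state Markov chain.
# `MarkovChainDef` and `markov_step` are illustrative names only.
using Random, StaticArrays

Base.@kwdef struct MarkovChainDef{M, R}
    P::M = [0.9 0.1; 0.2 0.8]          # row-stochastic transition matrix
    rng::R = Random.default_rng()
end

function markov_step(u, p::MarkovChainDef, t)
    i = Int(u[1])                      # current state index
    r = rand(p.rng)
    c = 0.0
    for j in 1:size(p.P, 2)
        c += p.P[i, j]
        r <= c && return SVector{1}(j) # sample the next state
    end
    return SVector{1}(size(p.P, 2))    # guard against floating-point round-off
end
```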
Here is another example, which is deterministic + stochastic:
Base.@kwdef struct Logistic2Unidir{V, C, R1, R2, Σy, R} <: DiscreteDefinition
    xi::V = [0.5, 0.5]
    c_xy::C = 0.1
    r₁::R1 = 3.78
    r₂::R2 = 3.66
    σ_xy::Σy = 0.05
    rng::R = Random.default_rng()
end
function eom_logistic2uni(u, p::Logistic2Unidir, t)
    (; xi, c_xy, r₁, r₂, σ_xy, rng) = p
    x, y = u
    f_xy = (y + (c_xy*(x + σ_xy * rand(rng))/2)) / (1 + (c_xy/2)*(1 + σ_xy))
    dx = r₁ * x * (1 - x)
    dy = r₂ * f_xy * (1 - f_xy)
    return SVector{2}(dx, dy)
end
# Initialises a `DiscreteDynamicalSystem`
function system(definition::Logistic2Unidir)
    return DiscreteDynamicalSystem(eom_logistic2uni, definition.xi, definition)
end
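For completeness, usage of such a definition would then look roughly like the sketch below. The exact return value of `trajectory` depends on the DynamicalSystems.jl version (older versions return only the dataset, newer ones also return the time vector), so treat this as a sketch rather than copy-paste code.

```julia
# Rough usage sketch for the definitions above; the return form of
# `trajectory` depends on the installed DynamicalSystems.jl version.
using DynamicalSystemsBase, Random

definition = Logistic2Unidir(c_xy = 0.5, rng = Random.MersenneTwister(1234))
ds = system(definition)
X = trajectory(ds, 1000)   # newer versions: `X, t = trajectory(ds, 1000)`
```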