
Differentiable ABMs with StochasticAD.jl

Open kavir1698 opened this issue 4 years ago • 15 comments

EDIT: The issue title has been edited to reflect that there is now a formal solution for differentiating ABMs, see this comment. Implementing a model that integrates with this as one of our "Integration Examples" would be lovely. :)

Perhaps we can even provide some function(s) that make integration easy, as we did when we were writing the integration example with CellListMap.jl (#659 )


I have tried to use Zygote.jl to differentiate some small, random ABMs, but I get many different errors. Are there data structures that Zygote does not support yet? A differentiable ABM would be amazing and would allow us to solve many problems.

kavir1698 avatar Mar 26 '20 16:03 kavir1698

I know nothing about zygote, nor the data structures it needs. I can't add anything to the discussion here. :(

p.s.: This issue is not well described. You should consider adding more information about the errors you get, what you are trying to do, a MWE, and more.

Datseris avatar Mar 26 '20 16:03 Datseris

Agent-based models are often very complex, and one needs to run many simulations and replicates to explore parameter space and understand a model's behavior. If we were able to differentiate ABMs with Automatic Differentiation (AD), we could save a lot of time finding parameter values that best fit data.

I have read about AD in Julia, but I am not deeply familiar with its limitations. I tried to differentiate a simple ABM (Agents.jl's wealth distribution example), but I get the following error. I am not sure which part of the data structures we use is not supported.

using Agents
using Zygote
using Statistics: std

mutable struct WealthAgent <: AbstractAgent
    id::Int
    wealth::Int
end

function wealth_model(;numagents = 100, initwealth = 1)
    model = ABM(WealthAgent, scheduler=random_activation)
    for i in 1:numagents
        add_agent!(model, initwealth)
    end
    return model
end

function agent_step!(agent, model)
    agent.wealth == 0 && return # do nothing
    ragent = random_agent(model)
    agent.wealth -= 1
    ragent.wealth += 1
end

function costWealth(M)
    agent_properties = [:wealth]
    model = wealth_model(numagents = M)
    step!(model, agent_step!, 5)
    std([agent.wealth for agent in values(model.agents)])
end

gradient(N -> costWealth(N), 1000)
ERROR: InexactError: Int64(0.002002002002002002)
Stacktrace:
 [1] Int64 at .\float.jl:709 [inlined]
 [2] convert at .\number.jl:7 [inlined]
 [3] _backvar at C:\Users\Ali\.julia\packages\Zygote\KNUTW\src\lib\array.jl:270 [inlined]
 [4] _backvar at C:\Users\Ali\.julia\packages\Zygote\KNUTW\src\lib\array.jl:269 [inlined]
 [5] #1264 at C:\Users\Ali\.julia\packages\Zygote\KNUTW\src\lib\array.jl:274 [inlined]
 [6] (::Zygote.var"#3219#back#1266"{Zygote.var"#1264#1265"{Bool,Colon,Float64,Array{Int64,1},Float64}})(::Float64) at C:\Users\Ali\.julia\packages\ZygoteRules\6nssF\src\adjoint.jl:49
 [7] costWealth at .\REPL[115]:7 [inlined]
 [8] (::typeof(∂(costWealth)))(::Float64) at C:\Users\Ali\.julia\packages\Zygote\KNUTW\src\compiler\interface2.jl:0
 [9] #17 at .\REPL[119]:1 [inlined]
 [10] (::Zygote.var"#38#39"{typeof(∂(#17))})(::Float64) at C:\Users\Ali\.julia\packages\Zygote\KNUTW\src\compiler\interface.jl:36
 [11] gradient(::Function, ::Int64, ::Vararg{Int64,N} where N) at C:\Users\Ali\.julia\packages\Zygote\KNUTW\src\compiler\interface.jl:45
 [12] top-level scope at REPL[119]:1

The error varies depending on the model, though.
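For context, here is a minimal pure-Julia illustration of my reading of the stacktrace (an interpretation, not a confirmed diagnosis): the pullback of `std` produces `Float64` gradient components, and writing them back into the `Int64` wealth array requires an exact integer conversion, which fails.

```julia
g = 0.002002002002002002  # the Float64 gradient component from the stacktrace

# Storing a non-integral Float64 into an Int64 slot needs an exact conversion:
println(isinteger(g))  # false
# convert(Int64, g)    # throws InexactError, matching frames [1]-[2] above
```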

kavir1698 avatar Mar 26 '20 16:03 kavir1698

I'm getting a core dump when I try this example

julia: /buildworker/worker/package_linux64/build/src/codegen.cpp:4357: jl_cgval_t emit_expr(jl_codectx_t&, jl_value_t*, ssize_t): Assertion `token.V->getType()->isTokenTy()' failed.

so I can't help troubleshoot until I figure that out.

I haven't worked with AD in Julia that much myself, but I've never considered using it on discrete variables. What if you changed wealth to Float64?
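For what it's worth, the suggested change is just the field type. A hypothetical variant of the struct above (`FloatWealthAgent` is an illustrative name, using the same thread-era Agents.jl API):

```julia
using Agents  # assumes the same Agents.jl version used in this thread

mutable struct FloatWealthAgent <: AbstractAgent
    id::Int
    wealth::Float64  # continuous wealth, so Float64 gradients can be stored
end
```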

Libbum avatar Mar 26 '20 21:03 Libbum

Thanks for the reply. Good point. I get the error below with Float64 for wealth, but this error seems more likely to be solvable.

ERROR: Need an adjoint for constructor Base.ValueIterator{Dict{Int64,WealthAgent}}. Gradient is of type Array{Base.RefValue{Any},1}
Stacktrace:
 [1] error(::String) at .\error.jl:33
 [2] (::Zygote.Jnew{Base.ValueIterator{Dict{Int64,WealthAgent}},Nothing,false})(::Array{Base.RefValue{Any},1}) at C:\Users\Ali\.julia\packages\Zygote\KNUTW\src\lib\lib.jl:294
 [3] (::Zygote.var"#378#back#196"{Zygote.Jnew{Base.ValueIterator{Dict{Int64,WealthAgent}},Nothing,false}})(::Array{Base.RefValue{Any},1}) at C:\Users\Ali\.julia\packages\ZygoteRules\6nssF\src\adjoint.jl:49
 [4] ValueIterator at .\abstractdict.jl:44 [inlined]
 [5] (::typeof(∂(Base.ValueIterator{Dict{Int64,WealthAgent}})))(::Array{Base.RefValue{Any},1}) at C:\Users\Ali\.julia\packages\Zygote\KNUTW\src\compiler\interface2.jl:0
 [6] ValueIterator at .\abstractdict.jl:44 [inlined]
 [7] (::typeof(∂(Base.ValueIterator)))(::Array{Base.RefValue{Any},1}) at C:\Users\Ali\.julia\packages\Zygote\KNUTW\src\compiler\interface2.jl:0
 [8] values at .\abstractdict.jl:123 [inlined]
 [9] (::typeof(∂(values)))(::Array{Base.RefValue{Any},1}) at C:\Users\Ali\.julia\packages\Zygote\KNUTW\src\compiler\interface2.jl:0
 [10] costWealth at .\REPL[8]:5 [inlined]
 [11] (::typeof(∂(costWealth)))(::Float64) at C:\Users\Ali\.julia\packages\Zygote\KNUTW\src\compiler\interface2.jl:0
 [12] #6 at .\REPL[10]:1 [inlined]
 [13] (::Zygote.var"#38#39"{typeof(∂(#6))})(::Float64) at C:\Users\Ali\.julia\packages\Zygote\KNUTW\src\compiler\interface.jl:36
 [14] gradient(::Function, ::Int64, ::Vararg{Int64,N} where N) at C:\Users\Ali\.julia\packages\Zygote\KNUTW\src\compiler\interface.jl:45
 [15] top-level scope at REPL[10]:1

kavir1698 avatar Mar 26 '20 22:03 kavir1698

I'm going to try and dig into this a little deeper. Completely agree it would be good to have this working. Luckily I've had a need to start working with Zygote myself recently, so hopefully we can get this done for v3

Libbum avatar Mar 29 '20 19:03 Libbum

You could attempt to get the strong solution of some models, but in many ABMs the discreteness is inherently non-differentiable through AD, and you'd have to move to the continuous master equation PDE to get a differentiable form.

ChrisRackauckas avatar Apr 03 '20 00:04 ChrisRackauckas

Thank you, @ChrisRackauckas, for your reply. It is great to have an expert's input on this issue. Is it possible to know when the discreteness is not a problem? And if not through AD, could numerical differentiation be applied to ABMs?

kavir1698 avatar Apr 03 '20 07:04 kavir1698

This one might be relevant to this thread: "Differentiable Agent-Based Simulation for Gradient-Guided Simulation-Based Optimization", https://arxiv.org/pdf/2103.12476.pdf

nunezmatias avatar Aug 06 '22 17:08 nunezmatias

That isn't a great way to do it. That smoothing can introduce a lot of bias and has exponential cost scaling.

We have a paper coming out very soon that shows how to do this, along with a new AD system in Julia that implements the method. I was wrong two years ago when I said it isn't possible: you just need an entirely different definition of automatic differentiation.

ChrisRackauckas avatar Aug 06 '22 19:08 ChrisRackauckas

@ChrisRackauckas this sounds very interesting. Is your paper out yet? Keen to see how you work around sampling from discrete distributions without reparameterizing. The only other way that comes to mind is policy gradients, which isn't really a true gradient.

ayushchopra96 avatar Oct 11 '22 13:10 ayushchopra96

Remind me tomorrow. The library should be out by the end of the week.

ChrisRackauckas avatar Oct 11 '22 13:10 ChrisRackauckas

> Remind me tomorrow. The library should be out by the end of the week.

Reminder :)

arnauqb avatar Oct 12 '22 11:10 arnauqb

Here it is: https://twitter.com/ChrisRackauckas/status/1582324437285625857

https://arxiv.org/abs/2210.08572
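
A minimal sketch of the StochasticAD.jl idea (a toy program, not Agents.jl-specific): `derivative_estimate` gives an unbiased single-sample estimate of d/dp E[f(p)] even when `f` samples discrete random variables, via stochastic triples.

```julia
using StochasticAD, Distributions
using Statistics: mean

# Toy stochastic program with a discrete random variable:
# E[f(p)] = 10p, so the true derivative is 10.
f(p) = rand(Bernoulli(p)) * 10

# Average many unbiased single-sample estimates of d/dp E[f(p)] at p = 0.5.
est = mean(derivative_estimate(f, 0.5) for _ in 1:10_000)
println(est)  # close to 10 in expectation
```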

Reopen the issue 😄

ChrisRackauckas avatar Oct 18 '22 11:10 ChrisRackauckas

Implementing a model that integrates with this as one of our "Integration Examples" would be lovely. :)

Perhaps we can even provide some function(s) that make integration easy, as we did when we were writing the integration example with CellListMap.jl (#659 )

Datseris avatar Oct 18 '22 13:10 Datseris

Interesting, @ChrisRackauckas. We did similar work with the reparameterization trick to enable differentiable ABMs, and we also demonstrate how to calibrate such large differentiable ABMs (~10 million agents) with deep neural networks: https://arxiv.org/abs/2207.09714. I'm also at MIT; I would love to chat further and share notes!

ayushchopra96 avatar Oct 18 '22 17:10 ayushchopra96