Ricky Chen
This library only contains basic ODE solving capabilities and gradient computations. It's definitely possible to build your own model using these tools, or you might want to look into open...
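A rough sketch of what those basic capabilities look like in practice: solve an ODE with `odeint` and backpropagate through the solution. The dynamics module, shapes, and loss here are placeholders, not from the original thread.

```python
import torch
from torchdiffeq import odeint

class Dynamics(torch.nn.Module):
    # Toy dynamics; any module mapping (t, y) -> dy/dt works.
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(2, 2)

    def forward(self, t, y):
        return self.linear(y)

func = Dynamics()
y0 = torch.randn(8, 2)              # batch of initial states
t = torch.linspace(0., 1., 10)      # times to evaluate the solution at

ys = odeint(func, y0, t)            # shape (10, 8, 2)
loss = ys[-1].pow(2).mean()
loss.backward()                     # gradients flow into func's parameters
```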
I don't really see why you need N separate odeint calls. Couldn't you concatenate all of them into a single state? We also allow passing in a tuple of tensors...
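A sketch of the single-call approach described above, assuming the N problems can be packed into one (tuple) state; the two dynamics functions are placeholders.

```python
import torch
from torchdiffeq import odeint

def f1(t, x):
    return -x

def f2(t, z):
    return torch.sin(t) * z

# Instead of separate odeint calls, evolve everything as one tuple state.
def combined(t, state):
    x, z = state
    return (f1(t, x), f2(t, z))

x0 = torch.randn(5)
z0 = torch.randn(3)
t = torch.linspace(0., 1., 20)

xs, zs = odeint(combined, (x0, z0), t)   # xs: (20, 5), zs: (20, 3)
```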
Currently the only thing we support is having the event function output a non-scalar tensor, e.g. of shape (batch_size,). The solver will then stop when the first event occurs, and...
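A minimal sketch of a batched event function, assuming the `odeint_event` interface; the dynamics and threshold are made up for illustration.

```python
import torch
from torchdiffeq import odeint_event

def func(t, y):
    # Toy dynamics: each batch element decays toward zero.
    return -y

def event_fn(t, y):
    # One value per batch element; integration stops at the first
    # zero crossing anywhere in the batch.
    return y.squeeze(-1) - 0.5

y0 = torch.tensor([[1.0], [2.0], [4.0]])   # batch_size = 3
t0 = torch.tensor(0.0)

event_t, solution = odeint_event(func, y0, t0, event_fn=event_fn)
# event_t is the time of the earliest event among the batch elements.
```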
Yes, this is on a TODO for now. https://github.com/rtqichen/torchdiffeq/blob/master/torchdiffeq/_impl/adjoint.py#L31 I'd need to get a bit finicky with pytorch's Function. For now, I think the non-adjoint version should support higher-order autodiffs.
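For the non-adjoint path, a second-order gradient can be taken the usual PyTorch way; a small sketch with a made-up scalar dynamics parameter:

```python
import torch
from torchdiffeq import odeint   # plain odeint, not odeint_adjoint

a = torch.tensor(0.5, requires_grad=True)

def func(t, y):
    return -a * y

y0 = torch.tensor([1.0])
t = torch.tensor([0.0, 1.0])

yT = odeint(func, y0, t)[-1]

# First derivative w.r.t. a, keeping the graph so we can differentiate again.
(g,) = torch.autograd.grad(yT.sum(), a, create_graph=True)
# Second derivative: differentiate the first-order gradient once more.
(h,) = torch.autograd.grad(g, a)
```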
Hmm, `odeint` always adds a first dimension that corresponds to `time`. Other than that, you're free to use any ordering you want. The ordering is mostly imposed by how the...
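Concretely, whatever layout the state has, the returned solution just gains a leading time dimension (placeholder shapes below):

```python
import torch
from torchdiffeq import odeint

def func(t, y):
    return -y

y0 = torch.randn(4, 3)          # e.g. (batch, dim) -- any layout you like
t = torch.linspace(0., 1., 7)   # 7 evaluation times

ys = odeint(func, y0, t)
print(ys.shape)                 # torch.Size([7, 4, 3]); time dim is prepended
```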
Ah, the parameters of the event function currently need to be part of the state if you want gradients to pass into the event function parameters. I've written a colab...
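Not the colab, just a rough sketch of the idea: carry the event-function parameter in the state with zero dynamics so gradients can reach it. This assumes `odeint_event` accepts a tuple state; names and values are placeholders.

```python
import torch
from torchdiffeq import odeint_event

threshold = torch.tensor(0.5, requires_grad=True)   # learnable event parameter

def func(t, state):
    y, thr = state
    # The event parameter rides along with zero dynamics, so it is part of
    # the state and hence of the gradient computation.
    return (-y, torch.zeros_like(thr))

def event_fn(t, state):
    y, thr = state
    return y - thr          # event when y crosses the learnable threshold

y0 = torch.tensor([1.0])
t0 = torch.tensor(0.0)

event_t, solution = odeint_event(func, (y0, threshold), t0, event_fn=event_fn)
event_t.backward()          # gradient flows into `threshold` via the state
```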
It sounds like you want to define a loss on multiple values of t between 0 and event_t? This can be done in a single odeint call: ``` ts...
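The original snippet is cut off above; one way the single call could look (the dynamics, number of points, and how `event_t` was obtained are assumptions):

```python
import torch
from torchdiffeq import odeint

def func(t, y):
    return -y

y0 = torch.tensor([1.0], requires_grad=True)
event_t = torch.tensor(0.8)          # stand-in for the event time found earlier

# Evaluate the trajectory at several times in [0, event_t] in one odeint call,
# then define the loss on all of those values at once.
K = 10
ts = torch.linspace(0., 1., K) * event_t

ys = odeint(func, y0, ts)            # shape (K, 1)
loss = ys.pow(2).mean()
loss.backward()                      # gradients flow back to y0
```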
If you have a minimal working example that can reproduce the memory leak, I can take a look at that. I'm not aware of memory leaks like the one you're...
Hmm. Mean should be easy because you just need to integrate over x(t) (i.e. store it as part of the state), assuming what you want is something like `1/T...
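A sketch of that augmented-state trick (the dynamics `f` and the horizon are placeholders): integrate x(t) alongside x itself, then divide by T at the end.

```python
import torch
from torchdiffeq import odeint

def f(t, x):
    return -x                        # placeholder dynamics

# Augment the state with a running integral of x(t); the mean over [0, T]
# is then the final integral divided by T.
def augmented(t, state):
    x, _ = state
    return (f(t, x), x)

x0 = torch.tensor([2.0])
integral0 = torch.zeros_like(x0)
T = 5.0
t = torch.tensor([0.0, T])

xs, integrals = odeint(augmented, (x0, integral0), t)
mean_x = integrals[-1] / T           # approximates (1/T) * integral of x(t)
```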
Oh, how about this. Let `x(t)` follow `dx(t)/dt = f(t, x(t))`. Let `m(t)` be the minimum of `x(t)` in the interval `0` to `t`. Then `m(t)` should have the ODE...
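The comment is cut off above, so the exact ODE the author had in mind isn't shown; as a hedged illustration of the same augmented-state idea, here is one possible smoothed version in which m(t) only moves when x(t) is near the current minimum and decreasing. The gate, the dynamics, and the smoothing width are all assumptions.

```python
import torch
from torchdiffeq import odeint

def f(t, x):
    return torch.cos(3.0 * t)        # placeholder dynamics

eps = 1e-2                           # smoothing width (assumed)

def augmented(t, state):
    x, m = state
    dx = f(t, x)
    gate = torch.sigmoid((m - x) / eps)      # ~1 when x <= m, ~0 otherwise
    dm = gate * torch.clamp(dx, max=0.0)     # m can only decrease
    return (dx, dm)

x0 = torch.tensor([0.0])
m0 = x0.clone()
t = torch.linspace(0., 2., 50)

xs, ms = odeint(augmented, (x0, m0), t)
# ms[-1] approximates the running minimum of x(s) over 0 <= s <= 2
```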