Owen L
That's a good example. I got some good examples working, but they also rely on more complex neural network energy functions (which just adds up in terms of LoC), at...
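(For concreteness, a minimal sketch of the kind of neural-network energy function I mean; the names and sizes here are made up, just to show where the extra lines of code come from:)

```python
import jax
import jax.numpy as jnp

# Minimal sketch (not the actual example code): a tiny MLP used as an
# energy function E_theta(x) -> scalar. Defining and initialising layers
# like these is what makes the examples grow in LoC.
def init_energy_params(key, dim, hidden=64):
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (dim, hidden)) / jnp.sqrt(dim),
        "b1": jnp.zeros(hidden),
        "w2": jax.random.normal(k2, (hidden, 1)) / jnp.sqrt(hidden),
        "b2": jnp.zeros(1),
    }

def energy(params, x):
    h = jnp.tanh(x @ params["w1"] + params["b1"])
    return (h @ params["w2"] + params["b2"]).squeeze()

# The drift a sampler needs is then just the (negative) gradient:
score = jax.grad(lambda params, x: -energy(params, x), argnums=1)
```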
I'm actually revisiting your paper (https://arxiv.org/pdf/2405.06464), and I'm curious how these methods can have such a high strong order but have basically the same ESS as constant Euler? Is ESS/sampling...
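(By ESS I mean the usual autocorrelation-based estimate, roughly the crude sketch below; this is my own illustration, not the paper's code. It only looks at how correlated successive samples are, which is maybe why it can stay flat even as the strong order of the integrator goes up.)

```python
import jax.numpy as jnp

def effective_sample_size(chain, max_lag=200):
    """Crude autocorrelation-based ESS for a 1D chain (illustration only)."""
    chain = jnp.asarray(chain)
    x = chain - chain.mean()
    n = x.shape[0]
    var = jnp.mean(x * x)
    tau = 1.0  # integrated autocorrelation time: tau = 1 + 2 * sum_k rho_k
    for k in range(1, max_lag):
        rho = jnp.mean(x[: n - k] * x[k:]) / var
        if rho <= 0:  # truncate at the first non-positive autocorrelation
            break
        tau = tau + 2.0 * rho
    return n / tau
```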
"In other words, while the solution is stable, the importance of the initial condition (or past errors) vanishes exponentially." This is generally true of samplers right? Like exponential convergence away...
> Thanks for the report – this issue is tracked in #8755
>
> To be quite honest, we're not putting much effort into the `jax.experimental.sparse` code these days, so...
A few points based on my understanding of states. The first is that the substate doesn't do anything here (since you get the substate based on the layers, which already contain...
Ah yes, that's good, I forgot you don't need to call `.update` since the substate wasn't meaningful (and you can therefore make it more elegant by scanning over both). Thanks for the...
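(Roughly what I mean by scanning over both, as a generic sketch; `apply_layer` is just a stand-in for the real stateful call:)

```python
import jax
import jax.numpy as jnp

# Carry the sample and the layer state together through lax.scan instead
# of pulling out a substate and calling `.update` separately.
def apply_layer(x, state):
    new_state = state + 1.0              # e.g. a running statistic / counter
    return x * jnp.tanh(state), new_state

def step(carry, _):
    x, state = carry
    x, state = apply_layer(x, state)
    return (x, state), x

(x_final, state_final), xs = jax.lax.scan(
    step, (jnp.ones(3), jnp.zeros(())), None, length=10
)
```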
Interestingly, if you change the t's to match along (via

```python
@jax.jit
def step_lorenz(x_curr, _):
    # numerical error
    solution = diffeqsolve(
        TERM, SOLVER, t0=0.0, t1=DISCRETE_DT, dt0=DISCRETE_DT, y0=x_curr
    )
    num_rejected_steps =...
```
Yeah, I'm not totally sure. I might start by trying just a `solver.step` approach, and if that also differs then it's in the solver; otherwise it's because `diffeqsolve` might be...
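(Something like this is what I have in mind; a sketch only, with `TERM`, `SOLVER` and `DISCRETE_DT` as stand-ins for whatever is defined in the actual script:)

```python
import diffrax
import jax.numpy as jnp

# Stand-in definitions, just so the sketch runs on its own.
TERM = diffrax.ODETerm(lambda t, y, args: -y)
SOLVER = diffrax.Tsit5()
DISCRETE_DT = 0.01
y0 = jnp.array([1.0, 0.0, 0.0])

# Drive the solver manually for a single step...
solver_state = SOLVER.init(TERM, 0.0, DISCRETE_DT, y0, None)
y1, y_error, dense_info, solver_state, result = SOLVER.step(
    TERM, 0.0, DISCRETE_DT, y0, None, solver_state, made_jump=False
)

# ...and compare against a one-step diffeqsolve call.
sol = diffrax.diffeqsolve(
    TERM, SOLVER, t0=0.0, t1=DISCRETE_DT, dt0=DISCRETE_DT, y0=y0
)
print(jnp.max(jnp.abs(y1 - sol.ys[0])))
```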
Maybe this belongs in diffrax, but since the core code is Equinox's bounded while loop (and `DirectAdjoint` is a pretty thin layer over it), I put it here.
I think this would be a good idea; even fairly simple techniques (e.g. basis encoding) might be useful, especially for those coming from classical machine learning. There are a...
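(Assuming basis encoding in the quantum data-encoding sense, a quick plain-JAX sketch of what a tutorial snippet could look like; no particular framework assumed:)

```python
import jax.numpy as jnp

# Basis encoding: a length-n bitstring is mapped to the corresponding
# computational basis state of n qubits, i.e. a one-hot vector of length 2**n.
def basis_encode(bits):
    bits = jnp.asarray(bits, dtype=jnp.int32)
    n = bits.shape[0]
    index = jnp.sum(bits * (2 ** jnp.arange(n - 1, -1, -1)))
    return jnp.zeros(2**n).at[index].set(1.0)

# e.g. [1, 0, 1] -> |101>, a one-hot vector with the 1 at index 5
state = basis_encode([1, 0, 1])
```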