DiffEqFlux.jl

Pre-built implicit layer architectures with O(1) backprop, GPUs, and stiff+non-stiff DE solvers, demonstrating scientific machine learning (SciML) and physics-informed machine learning methods
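To illustrate the kind of pre-built implicit layer the package provides, here is a minimal `NeuralODE` sketch. It assumes a recent DiffEqFlux release with the Lux backend; the network size, solver, and keyword arguments are illustrative choices, not prescribed by the package.

```julia
using DiffEqFlux, OrdinaryDiffEq, Lux, ComponentArrays, Random

rng = Random.default_rng()

# A small Lux network defines the ODE right-hand side du/dt = f(u, p, t)
dudt = Lux.Chain(Lux.Dense(2 => 16, tanh), Lux.Dense(16 => 2))

# Pre-built implicit layer: wraps the network and an ODE solver
node = NeuralODE(dudt, (0.0f0, 1.0f0), Tsit5(); saveat = 0.1f0)

ps, st = Lux.setup(rng, dudt)               # initialize parameters and states
u0 = Float32[2.0, 0.0]                      # initial condition
sol, st = node(u0, ComponentArray(ps), st)  # forward pass returns an ODE solution
```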

Results: 90 DiffEqFlux.jl issues, sorted by recently updated

## Checklist

- [ ] Appropriate tests were added
- [ ] Any code changes were done in a way that does not break public API
- [ ] All...

This pull request changes the compat entry for the `OptimizationPolyalgorithms` package from `0.2` to `0.2, 0.3` for package docs. This keeps the compat entries for earlier versions. Note: I have...
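The same pattern applies to the other compat PRs in this list. As a hedged sketch (the `docs` project path is an assumption about the repository layout), the change amounts to widening the allowed version range in the docs environment's `[compat]` section, which can also be done locally with Pkg:

```julia
using Pkg

# Activate the docs environment whose compat entries are being widened
Pkg.activate("docs")

# Allow both the 0.2 and 0.3 release series of OptimizationPolyalgorithms,
# equivalent to `OptimizationPolyalgorithms = "0.2, 0.3"` under [compat] in Project.toml
Pkg.compat("OptimizationPolyalgorithms", "0.2, 0.3")
```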

This pull request changes the compat entry for the `Optimization` package from `3.9` to `3.9, 4` for package docs. This keeps the compat entries for earlier versions. Note: I have...

This pull request changes the compat entry for the `OptimizationOptimisers` package from `0.2` to `0.2, 0.3` for package docs. This keeps the compat entries for earlier versions. Note: I have...

This pull request changes the compat entry for the `OptimizationOptimJL` package from `0.2, 0.3` to `0.2, 0.3, 0.4` for package docs. This keeps the compat entries for earlier versions. Note:...

## Checklist

- [NO] Appropriate tests were added
- [YES] Any code changes were done in a way that does not break public API
- [-] All documentation related to...

- In the [weather forecasting example](https://docs.sciml.ai/DiffEqFlux/stable/examples/neural_ode_weather_forecast/#Weather-forecasting-with-neural-ODEs), you choose `sum(abs2)` as the loss function, but in [Sebastian Callh's personal blog](https://sebastiancallh.github.io/), he uses `Flux.mse` as the loss function. And the difference...

question
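A minimal sketch of the relationship the question above is pointing at: `sum(abs2, ...)` is the sum of squared errors, while `Flux.mse` takes the mean, so the two losses differ only by a constant scale factor (which mainly changes the effective learning rate). The random arrays here are placeholders, not the tutorial's data.

```julia
using Flux

ŷ = rand(Float32, 2, 50)   # predictions (e.g. a neural ODE solution)
y = rand(Float32, 2, 50)   # observations

sse = sum(abs2, ŷ .- y)    # loss used in the weather forecasting example
mse = Flux.mse(ŷ, y)       # loss used in the blog post: mean of squared errors

@assert sse ≈ mse * length(y)   # identical up to the constant factor length(y)
```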

This pull request changes the compat entry for the `ForwardDiff` package from `0.10` to `0.10, 1` for package docs. This keeps the compat entries for earlier versions. Note: I have...

## Checklist

- [ ] Appropriate tests were added
- [ ] Any code changes were done in a way that does not break public API
- [ ] All...

I am new to neural differential equations and have been going through some tutorials to better understand them. I noticed that in Python's [Diffrax tutorial](https://docs.kidger.site/diffrax/examples/neural_ode/), they use a batching scheme...

question
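A hedged sketch of one common batching scheme for neural ODE training in Julia, similar in spirit to the Diffrax tutorial referenced in the question above: sample short contiguous time windows from the trajectory and fit the model on each window, using the window's first point as the initial condition. The `predict` helper and the loss shape here are assumptions for illustration, not the tutorial's code.

```julia
using Random

# Full trajectory: times `tsteps` and data `ys` of size (state_dim, length(tsteps)).
# `predict(θ, u0, ts)` is assumed to solve the neural ODE from u0 over the times ts
# and return an array matching the corresponding slice of `ys`.

function window_batches(tsteps, ys; window = 20, batchsize = 8, rng = Random.default_rng())
    # Sample `batchsize` random starting indices for contiguous windows
    starts = rand(rng, 1:(length(tsteps) - window + 1), batchsize)
    return [(tsteps[s:s+window-1], ys[:, s:s+window-1]) for s in starts]
end

function batch_loss(θ, batches)
    l = zero(eltype(θ))
    for (ts, y) in batches
        pred = predict(θ, y[:, 1], ts)   # integrate from the window's first state
        l += sum(abs2, pred .- y)        # sum of squared errors on this window
    end
    return l / length(batches)
end
```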