Muhammad Firmansyah Kasim
Hi again, I have found this repository useful as well. One thing that would be ideal to have is a differential algebraic equation (DAE) solver (http://www.scholarpedia.org/article/Differential-algebraic_equations), at least...
I am trying to include a functional in another torch.nn.Module (for a hypernetwork implementation), but it's currently a bit inconvenient to do that. Here's some example code: ```python import torch,...
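Since the snippet above is truncated, here is a minimal sketch (not the issue's own code) of what embedding a functional such as `xitorch.optimize.rootfinder` inside a `torch.nn.Module` can look like; the `ImplicitLayer` name, the fixed-point equation, and the `params=(...)` calling convention are assumptions made for illustration.

```python
# A minimal sketch (not the original issue code): calling a xitorch functional
# from inside a torch.nn.Module's forward, assuming the usual
# rootfinder(fcn, y0, params=(...,)) calling convention.
import torch
import torch.nn as nn
from xitorch.optimize import rootfinder

class ImplicitLayer(nn.Module):
    # hypothetical module: finds y such that y = tanh(W y + x)
    def __init__(self, n: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n, n) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        def fcn(y: torch.Tensor, x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
            # residual of the fixed-point equation y = tanh(w @ y + x)
            return y - torch.tanh(torch.matmul(y, w.T) + x)

        y0 = torch.zeros_like(x)
        # the module's parameter is passed through ``params`` so gradients can
        # flow back to it through the implicit-function machinery
        return rootfinder(fcn, y0, params=(x, self.weight))

layer = ImplicitLayer(4)
out = layer(torch.randn(4))
print(out.shape)
```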
**Describe the bug** `xitorch.optimize.rootfinder` fails with a simple case. **To Reproduce** ```python import torch from xitorch.optimize import rootfinder def fcn(x: torch.Tensor, r: torch.Tensor) -> torch.Tensor: res0 = x[0] - x[1]...
**Describe the bug** Upgrading PyTorch to 1.10 makes some of the tests fail (while they succeed in 1.9 & 1.8). **To Reproduce** Steps to reproduce the behavior: 1. Install...
- [x] `svd` (taking the `n` lowest or uppermost decomposition, just like `symeig`) - [ ] `solve` - [x] `(method="cg")` - [x] `(method="bicgstab")` - [ ] `(method="gmres")` (#10) - [...
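For context on the `method="cg"` entry in the list above, here is an illustrative sketch of the conjugate-gradient iteration such a `solve` backend typically implements; this is a generic textbook version, not xitorch's actual implementation, and the function name, arguments, and stopping rule are assumptions.

```python
# Illustrative conjugate-gradient sketch for solving A x = b with
# symmetric positive-definite A; not xitorch's actual solve(method="cg") code.
import torch

def cg(A: torch.Tensor, b: torch.Tensor, max_iter: int = 100, rtol: float = 1e-8) -> torch.Tensor:
    x = torch.zeros_like(b)
    r = b - A @ x          # residual
    p = r.clone()          # search direction
    rs_old = torch.dot(r, r)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / torch.dot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = torch.dot(r, r)
        if torch.sqrt(rs_new) < rtol * torch.linalg.norm(b):
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

A = torch.tensor([[4.0, 1.0], [1.0, 3.0]])
b = torch.tensor([1.0, 2.0])
print(cg(A, b))  # close to torch.linalg.solve(A, b)
```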
- [ ] `rootfinder` and `equilibrium` (can take a look at [SciPy](https://github.com/scipy/scipy/blob/914523af3bc03fe7bf61f621363fca27e97ca1d6/scipy/optimize/nonlin.py#L688), good first issue): - [x] `(method="broyden1")` - [x] `(method="broyden2")` - [x] `(method="anderson")` - [ ] `(method="diagbroyden")` - [x]...
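As a companion to the `method="broyden1"` entry above, a sketch of Broyden's first ("good") method, which is the kind of rank-one quasi-Newton update such a rootfinder performs; this is a generic illustration, not xitorch's or SciPy's actual code, and the finite-difference Jacobian initialization is an assumption.

```python
# Illustrative sketch of Broyden's first ("good") method behind a
# method="broyden1" rootfinder; not xitorch's or SciPy's actual code.
import torch

def broyden1(fcn, x0: torch.Tensor, max_iter: int = 50, tol: float = 1e-8) -> torch.Tensor:
    x = x0.clone()
    f = fcn(x)
    n = x.numel()
    eye = torch.eye(n, dtype=x.dtype)
    # crude finite-difference estimate of the initial Jacobian
    eps = 1e-6
    J = torch.stack([(fcn(x + eps * eye[i]) - f) / eps for i in range(n)], dim=1)
    for _ in range(max_iter):
        dx = torch.linalg.solve(J, -f)      # quasi-Newton step
        x = x + dx
        f_new = fcn(x)
        if torch.linalg.norm(f_new) < tol:
            return x
        df = f_new - f
        # rank-one "good Broyden" update: J <- J + (df - J dx) dx^T / (dx . dx)
        J = J + torch.outer(df - J @ dx, dx) / torch.dot(dx, dx)
        f = f_new
    return x

# Example: x0^2 + x1 = 2 and x0 - x1 = 0  ->  root near (1, 1)
fcn = lambda x: torch.stack([x[0] ** 2 + x[1] - 2.0, x[0] - x[1]])
print(broyden1(fcn, torch.tensor([2.0, 0.5])))
```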
- `gridinterp1` (interpolation where the input points are given on a grid) - [ ] `method="cspline"` (cubic spline) - [ ] `method="linear"` - `interp1` (interpolation where input points are not...
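To illustrate the `method="linear"` entry above, here is a minimal 1-D linear interpolation sketch built on `torch.searchsorted`; the function name and argument layout are assumptions, and this is not xitorch's actual `interp1` implementation.

```python
# Illustrative 1-D linear interpolation; not xitorch's actual interp1 code.
import torch

def linear_interp1(x: torch.Tensor, y: torch.Tensor, xq: torch.Tensor) -> torch.Tensor:
    # x: sorted sample positions, y: sample values, xq: queries inside [x[0], x[-1]]
    idx = torch.searchsorted(x, xq).clamp(1, x.numel() - 1)
    x0, x1 = x[idx - 1], x[idx]
    y0, y1 = y[idx - 1], y[idx]
    w = (xq - x0) / (x1 - x0)
    return y0 + w * (y1 - y0)

x = torch.linspace(0, 3.1416, 10)
y = torch.sin(x)
print(linear_interp1(x, y, torch.tensor([0.5, 1.5, 2.5])))  # ~ sin of the queries
```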
- [ ] `solve_ivp` - [x] `(method="rk45")` (RK4(5) with adaptive step size) - [ ] `(method="dop853")` (can take a look at SciPy, good first issue) - [ ] `(method="bdf")` -...
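For the `solve_ivp` entries above, a fixed-step classical RK4 stepper is sketched below as a reference point; the actual `method="rk45"` additionally carries an embedded higher-order error estimate to adapt the step size, which is omitted here, and the function name and signature are assumptions.

```python
# Illustrative fixed-step RK4 integrator; the real method="rk45" adapts the
# step size using an embedded error estimate, which this sketch omits.
import torch

def rk4_solve(fcn, t: torch.Tensor, y0: torch.Tensor) -> torch.Tensor:
    # fcn(t, y) -> dy/dt; t is a 1-D tensor of time points; returns y at every t
    ys = [y0]
    y = y0
    for i in range(t.numel() - 1):
        h = t[i + 1] - t[i]
        k1 = fcn(t[i], y)
        k2 = fcn(t[i] + h / 2, y + h / 2 * k1)
        k3 = fcn(t[i] + h / 2, y + h / 2 * k2)
        k4 = fcn(t[i] + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        ys.append(y)
    return torch.stack(ys)

# dy/dt = -y with y(0) = 1  ->  y(t) = exp(-t)
t = torch.linspace(0, 1, 11)
y = rk4_solve(lambda t, y: -y, t, torch.tensor([1.0]))
print(y[-1])  # ~ exp(-1) ≈ 0.3679
```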
Fixes #92043. I'm following numpy's implementation as suggested by @min-jean-cho. I found out that this implementation still produces overflow if we're working with numbers greater than `finfo.max / 2`, but...
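As a generic illustration of the `finfo.max / 2` limitation mentioned above (this is not the PR's code, and it assumes the overflow stems from an intermediate sum, which is what that threshold suggests): two values that each exceed `finfo.max / 2` overflow to `inf` as soon as they are added, before any subsequent division can bring the result back into range.

```python
# Generic illustration of the finfo.max / 2 threshold; not the PR's own code.
import torch

f = torch.finfo(torch.float32)
a = torch.tensor(f.max * 0.6)
b = torch.tensor(f.max * 0.6)
print((a + b) / 2)      # inf: the intermediate sum already overflowed
print(a + (b - a) / 2)  # the same midpoint computed without overflowing
```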
## 🚀 Feature An option to set gradients of unused inputs to zeros instead of `None` in `torch.autograd.grad`. Probably something like: `torch.autograd.grad(outputs, inputs, ..., zero_grad_unused=False)` where `zero_grad_unused` will be ignored...
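For context, here is a minimal sketch of the workaround this feature would simplify: request gradients with the existing `allow_unused=True` argument of `torch.autograd.grad` and replace the resulting `None` entries with zeros by hand.

```python
# Sketch of the current workaround: allow_unused=True plus manual zero-filling.
import torch

x = torch.randn(3, requires_grad=True)
unused = torch.randn(3, requires_grad=True)   # does not participate in the output
out = (x ** 2).sum()

grads = torch.autograd.grad(out, (x, unused), allow_unused=True)
grads = tuple(g if g is not None else torch.zeros_like(inp)
              for g, inp in zip(grads, (x, unused)))
print(grads[1])  # zeros instead of None
```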