stannum
Fusing Taichi into PyTorch
Hi! I'm writing a convolution-like operator using Stannum. It can be used throughout a neural network, meaning each layer may have a different input/output shape. When trying to register...
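A minimal sketch of one way to handle per-layer shapes: build a separate `Tube` for each layer, registering that layer's input/output shape. The kernel here is a stand-in for the convolution-like operator, and the registration calls follow my reading of the stannum README, so exact signatures may differ:

```python
import taichi as ti
import torch
from stannum import Tube

ti.init(arch=ti.cpu)

@ti.kernel
def scale(src: ti.template(), dst: ti.template()):
    # stand-in for the convolution-like operator
    for i in src:
        dst[i] = src[i] * 2.0

def make_layer(shape):
    # one Tube per layer, registered with that layer's shape
    return Tube() \
        .register_input_tensor(shape, torch.float32, "src", False) \
        .register_output_tensor(shape, torch.float32, "dst", True) \
        .register_kernel(scale, ["src", "dst"]) \
        .finish()

layer_a = make_layer((16,))  # first layer's shape
layer_b = make_layer((32,))  # a later layer with a different shape
out_a = layer_a(torch.ones(16))
out_b = layer_b(torch.ones(32))
```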
Currently `stannum` (and Taichi in general) cannot do automatic batching the way PyTorch does. For example, the kernel below can only handle 3 arrays, but if we have a batch of arrays,...
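Until automatic batching exists, one workaround is to give fields an explicit leading batch dimension and loop over it inside the kernel. A sketch in plain Taichi (the field shapes are illustrative; if `Tube`'s shape registration supports flexible dimensions, the batch size need not be fixed up front):

```python
import taichi as ti

ti.init(arch=ti.cpu)

batch, n = 4, 10
a = ti.field(ti.f32, shape=(batch, n))
b = ti.field(ti.f32, shape=(batch, n))
out = ti.field(ti.f32, shape=(batch, n))

@ti.kernel
def add_batched():
    # struct-for over (batch, element) indices; the batch loop is explicit
    # here rather than implicit as in PyTorch broadcasting
    for b_idx, i in a:
        out[b_idx, i] = a[b_idx, i] + b[b_idx, i]
```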
With the current Taichi (v0.9.1 - 1.2.1), calling a `Tube` N times results in O(N^2) total time complexity, because when creating a field Taichi needs to inject kernel information into a...
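A hypothetical micro-benchmark to observe the growth; the `Tube` here is minimal, and the expectation (under the claim above) is that per-call cost increases as more fields get created:

```python
import time
import taichi as ti
import torch
from stannum import Tube

ti.init(arch=ti.cpu)

@ti.kernel
def copy(src: ti.template(), dst: ti.template()):
    for i in src:
        dst[i] = src[i]

tube = Tube() \
    .register_input_tensor((8,), torch.float32, "src", False) \
    .register_output_tensor((8,), torch.float32, "dst", True) \
    .register_kernel(copy, ["src", "dst"]) \
    .finish()

x = torch.ones(8)
start = time.perf_counter()
for n in range(1, 401):
    tube(x)  # each call creates fresh fields, so later calls get slower
    if n % 100 == 0:
        print(n, round(time.perf_counter() - start, 3))
```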
This issue tracks breaking-change events in upstream dependencies that break this library.
Is this something worth adding? Happy to give it a go. I see this is already partially supported for complex types. Is it preferable to just convert scalar fields to...
Currently, although `Tin` is a subclass of `torch.nn.Module`, we cannot actually save parameters and "model structures" in `Tin`, because parameters are values in Taichi fields and "model structures" are actually Taichi...
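A workaround sketch: serialize field values by round-tripping through torch tensors with Taichi's `to_torch`/`from_torch`; `weight_field` and the file name are hypothetical:

```python
import taichi as ti
import torch

ti.init(arch=ti.cpu)

weight_field = ti.field(ti.f32, shape=(4, 4))  # hypothetical learnable field

# save: pull the field's values into a torch tensor, then torch.save as usual
torch.save({"weight": weight_field.to_torch()}, "weights.pt")

# load: push the saved values back into the field
state = torch.load("weights.pt")
weight_field.from_torch(state["weight"])
```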
Currently, `Tin` and `Tube` are subclasses of `torch.nn.Module`, and they can have learnable parameters in the form of values in Taichi fields. However, these values cannot be optimized by...
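One possible workaround, assuming `Tube` propagates gradients to differentiable input tensors: keep the learnable values as a `torch.nn.Parameter` fed into the kernel as an input, so a standard torch optimizer can update them. The kernel and names are illustrative:

```python
import taichi as ti
import torch
from stannum import Tube

ti.init(arch=ti.cpu)

@ti.kernel
def apply_weight(x: ti.template(), w: ti.template(), out: ti.template()):
    for i in x:
        out[i] = x[i] * w[i]

tube = Tube() \
    .register_input_tensor((4,), torch.float32, "x", False) \
    .register_input_tensor((4,), torch.float32, "w", True) \
    .register_output_tensor((4,), torch.float32, "out", True) \
    .register_kernel(apply_weight, ["x", "w", "out"]) \
    .finish()

weight = torch.nn.Parameter(torch.randn(4))
opt = torch.optim.SGD([weight], lr=1e-2)
x = torch.ones(4)

for _ in range(10):
    loss = tube(x, weight).sum()
    opt.zero_grad()
    loss.backward()  # gradient flows back to `weight` through the Tube
    opt.step()       # a plain torch optimizer updates the values
```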
As mentioned in the README, we currently have these limitations:
* The registered field with `complex_dtype=True` must be an appropriate `VectorField` or `ScalarField`
* If it's a `VectorField`, `n` should be `2`,...
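A sketch of what a registration satisfying these constraints might look like with `Tin`, assuming `register_input_field`/`register_output_field` accept a `complex_dtype` flag as the limitation list suggests; the class and kernel are hypothetical:

```python
import taichi as ti
import torch
from stannum import Tin

ti.init(arch=ti.cpu)

@ti.data_oriented
class ComplexIdentity:
    def __init__(self):
        # n=2 VectorField: each entry holds a (real, imag) pair
        self.inp = ti.Vector.field(2, ti.f32, shape=(8,))
        self.out = ti.Vector.field(2, ti.f32, shape=(8,))

    @ti.kernel
    def forward(self):
        for i in self.inp:
            self.out[i] = self.inp[i]

holder = ComplexIdentity()
layer = Tin(holder, device=torch.device("cpu")) \
    .register_kernel(holder.forward) \
    .register_input_field(holder.inp, complex_dtype=True) \
    .register_output_field(holder.out, complex_dtype=True) \
    .finish()

y = layer(torch.ones(8, dtype=torch.complex64))  # complex in, complex out
```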
Support bridging Taichi to PaddlePaddle with the same features as the PyTorch support.
Support bridging Taichi to TensorFlow with the same functionality as the PyTorch support.