flambeau
Nim bindings to libtorch
Wraps a few more things:
- `tanh`, `tanh_mut`
- `deviceCount`
- `TensorDataset`, `toDataset`

and updates the torch downloader to get the latest version by default (1.10.2 libtorch with CUDA...
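A hypothetical usage sketch for the newly wrapped procs; only the names `tanh`, `tanh_mut` and `deviceCount` come from the changelog above, while the `zeros` constructor and the exact signatures are assumptions:

```nim
import flambeau

# `tanh`, `tanh_mut` and `deviceCount` are from the changelog above;
# the `zeros` constructor and signatures are assumptions.
var t = zeros[float32](2, 3)  # hypothetical tensor constructor
let u = t.tanh()              # out-of-place: returns a new Tensor
t.tanh_mut()                  # in-place variant, mutates `t`
echo deviceCount()            # number of CUDA devices visible to Torch
```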
The `hasCuda` procedure, which should return whether the linked Torch library was compiled with CUDA support, does not live in the global Torch namespace. In the Torch documentation that procedure...
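A minimal sketch of how the query could be bound from its actual namespace; `torch::cuda::is_available()` is the C++-side name, but the Nim proc name and pragma details here are assumptions:

```nim
# Hypothetical binding: the CUDA query lives in the `torch::cuda`
# namespace on the C++ side, not in the global `torch` one.
proc cudaIsAvailable*(): bool
  {.importcpp: "torch::cuda::is_available", header: "torch/torch.h".}
```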
Right now we use `lent UncheckedArray` instead of `ptr UncheckedArray` in some places. At the moment `lent` is handled behind the scenes with pointers, but from [this](https://discord.com/channels/371759389889003530/755344160592101389/801595200987856917) conversation with...
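To make the trade-off concrete, here is a minimal pure-Nim sketch (not flambeau's actual code) of the two access styles: `lent` is a compiler-checked immutable borrow, while `ptr UncheckedArray` is a raw, unchecked pointer whose lifetime the caller must manage.

```nim
type Buffer = object
  data: seq[float32]

proc borrowed(b: Buffer): lent seq[float32] =
  b.data                       # borrow: no copy, immutable, checked

proc raw(b: var Buffer): ptr UncheckedArray[float32] =
  # raw view: no bounds or lifetime checks at all
  cast[ptr UncheckedArray[float32]](b.data[0].addr)

var buf = Buffer(data: @[1'f32, 2, 3])
echo borrowed(buf)[0]          # read through the borrow
raw(buf)[1] = 5                # mutation through the raw pointer
echo buf.data[1]               # the underlying seq was changed
```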
Currently we need to inline C++ code because Nim cannot generate C++ code with default values or wrap types that lack default constructors (https://github.com/nim-lang/Nim/issues/4687). This leads to...
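A hedged sketch of the kind of workaround involved: for a C++ type without a default constructor, every Nim-side value has to come from an explicit constructor call, because an uninitialized Nim declaration would make the generated C++ emit `Thing t;`, which does not compile. The library and names below are hypothetical:

```nim
# Hypothetical wrapper for a C++ type with no default constructor.
type CppThing {.importcpp: "mylib::Thing", header: "mylib.h".} = object

# Always construct through an importcpp constructor proc, so the
# generated C++ never declares an uninitialized `mylib::Thing t;`.
proc initThing(x: cint): CppThing
  {.importcpp: "mylib::Thing(@)", constructor.}
```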
[RFC] Should we keep dimensions (like Arraymancer) or squeeze dimensions (like PyTorch) by default
When we, for example, slice a Tensor or perform a reduction along an axis, we have two options: either keep the dimensions of the original Tensor even if its...
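A pure-Nim illustration of the two behaviours (no libtorch involved), summing a 2x3 matrix along its second axis:

```nim
let a = @[@[1, 2, 3], @[4, 5, 6]]   # shape [2, 3]

# keepdim (Arraymancer-style): the reduced axis stays, shape [2, 1]
var kept: seq[seq[int]]
for row in a:
  var s = 0
  for x in row: s += x
  kept.add @[s]

# squeeze (PyTorch-style): the reduced axis is dropped, shape [2]
var squeezed: seq[int]
for row in kept:
  squeezed.add row[0]

echo kept      # @[@[6], @[15]]
echo squeezed  # @[6, 15]
```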
Right now there are a few tests in https://github.com/SciNim/flambeau/blob/master/tests/arraymancerTestSuite/tensor/test_accessors.nim that fail. Most notably:
- No bounds-checking
- Expressions like `a[1, 1] += 10` aren't possible because `a[1, 1]` is immutable.
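A minimal sketch of the `var`-returning accessor that would make the mutable case work; the `Mat` type here is hypothetical, not flambeau's Tensor:

```nim
# A value-returning `[]` alone yields an immutable copy; adding a
# `var`-returning overload is what makes `a[i, j] += 10` legal.
type Mat = object
  data: array[4, int]   # 2x2, row-major

proc `[]`(m: Mat, i, j: int): int = m.data[i * 2 + j]
proc `[]`(m: var Mat, i, j: int): var int = m.data[i * 2 + j]

var a = Mat(data: [1, 2, 3, 4])
a[1, 1] += 10   # works: the `var Mat` overload returns `var int`
echo a[1, 1]    # 14
```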
Unlike regular PyTorch/LibTorch, the torchvision, torchtext, and torchaudio libraries do not ship nice prebuilt binaries that we can download. So we need to either: - get the library from conda or...
Currently we use a mix of dynamic linking with `passL: -lc10 -ltorch_cpu` and static linking with `{.link.}`. We want to cleanly use dynamic linking to start with, as the linker step...
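A sketch of what a dynamic-linking-only configuration could look like in `nim.cfg`; the libtorch path is a placeholder:

```
# nim.cfg sketch -- dynamic linking only (paths are placeholders)
--passL:"-L/path/to/libtorch/lib"
--passL:"-lc10 -ltorch_cpu"
--passL:"-Wl,-rpath,/path/to/libtorch/lib"
```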