Daniel Falbel
Great!! Perhaps something equivalent to the below is missing for ROCm? https://github.com/mlverse/torch/blob/fef4bf086c9fa4c5420997c04f01190cb4594d5d/lantern/CMakeLists.txt#L192
It seems that setting this would help: https://cmake.org/cmake/help/latest/prop_tgt/HIP_STANDARD.html
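A minimal sketch of what setting that property could look like in lantern's CMakeLists.txt. The target name `lantern` and the standard value `17` are assumptions here, not taken from the actual build files:

```cmake
# Hypothetical sketch: pin the HIP dialect for the ROCm build, analogous to
# how CXX_STANDARD is pinned elsewhere. Target name and value are assumptions.
set_target_properties(lantern PROPERTIES
  HIP_STANDARD 17
  HIP_STANDARD_REQUIRED ON
)
```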
I don't think `torch::apply` is equivalent to `std::apply`. I think `torch::apply` corresponds to https://pytorch.org/docs/stable/generated/torch.Tensor.apply_.html, while `std::apply` is metaprogramming stuff from C++: https://en.cppreference.com/w/cpp/utility/apply. `std::apply` is a C++17 feature, so that...
That's great progress!! 👍 Hmm, this seems to be related to the clang version, perhaps? Or something like this?
Unfortunately there's currently no way of doing it directly from R. You can potentially `jit_trace` your model, load it in PyTorch, and export it to ONNX.
@sebffischer Thanks for reporting! That's really unexpected. Indeed, when casting numeric `NA` values to tensors we get a `nan` value, e.g.:

```
torch_tensor(NA_real_)
```

However it seems something is lost...
Hi @MaximilianPi, Thanks for the detailed benchmarks! That's nice! I suspect that starting from that point we are calling GC in every backward iteration and thus adding a large...
Hi @rdinnager, Thanks for reporting. That's weird. I'd assume this is a mismatch between the torch version and the safetensors version, as at some point I think I saw some...
ohhh, I think that might be the case. You are right, you might need to downgrade safetensors or use the dev version of torch. I'm going to make a new...
Our implementation of `jit_compile` is a very thin wrapper around LibTorch's `torch::jit::compile`, and I am not sure I fully understand how it works internally. I opened https://discuss.pytorch.org/t/fork-and-wait-within-libtorchs-torch-compile/183892 to hopefully understand...