Daniel Falbel
If you installed from CRAN, GPU code wouldn't work because Colab doesn't have a compatible version of CUDA. The URL you are creating doesn't really need to exist. You need...
Hi @k-bingcai , I was not able to reproduce the issue. It might be an incompatibility between the CUDA version and torch. Can you post your `sessionInfo()`? As well as...
I'm pretty sure the problem is caused by an ABI compatibility issue between CUDA 11 (used by torch) and the CUDA 12 you have installed in that environment. I suggest you...
With the pre-built binaries, the globally installed CUDA version doesn't matter, because the correct version is shipped inside the package. That's actually a similar approach to what PyTorch does.
I think this would be nice, but I'm not sure how to deal with sparse multidimensional tensors with `length(dim) > 2`. Treating 2D tensors specially can cause problems for more...
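For context, one rank-agnostic way to represent sparsity is the COO layout: a map from index tuples to values, with no special casing for 2D at all. A minimal pure-Python sketch (the class and method names are illustrative, not torch's API):

```python
class SparseCOO:
    """Minimal COO-style sparse tensor of arbitrary rank (illustrative only)."""

    def __init__(self, shape):
        self.shape = tuple(shape)
        self.data = {}  # maps an index tuple of len(shape) ints to a value

    def __setitem__(self, idx, value):
        if len(idx) != len(self.shape):
            raise IndexError("index rank must match tensor rank")
        if value != 0:
            self.data[tuple(idx)] = value
        else:
            self.data.pop(tuple(idx), None)  # storing zero clears the entry

    def __getitem__(self, idx):
        return self.data.get(tuple(idx), 0)  # absent entries read as zero

    def nnz(self):
        return len(self.data)


# The same code path handles rank 2 and rank 3 -- no 2D special case needed.
m = SparseCOO((4, 5))
m[1, 2] = 7.0
t = SparseCOO((2, 3, 4))
t[0, 2, 3] = 1.5
```

The point of the sketch is that a 2D-specific code path buys nothing here; the rank only shows up as the length of the index tuple.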
We have a somewhat special use case where we launch IPyKernel from a background thread, e.g.:

```python
import threading
import time
import asyncio
import sys
from ipykernel.kernelapp import IPKernelApp

old_stdout,...
```
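For illustration, the stdout bookkeeping this pattern relies on can be sketched without ipykernel itself: save the original stream, swap in a replacement while the background thread runs, and restore it afterwards (the helper name is our own, not ipykernel's):

```python
import io
import sys
import threading


def run_with_captured_stdout(fn):
    """Swap sys.stdout for a buffer while fn runs in a background thread."""
    old_stdout = sys.stdout      # keep a handle to the real stream
    buffer = io.StringIO()
    sys.stdout = buffer          # sys.stdout is process-global, so the
    try:                         # background thread's prints land in buffer
        worker = threading.Thread(target=fn)
        worker.start()
        worker.join()
    finally:
        sys.stdout = old_stdout  # always restore the original stream
    return buffer.getvalue()


captured = run_with_captured_stdout(lambda: print("hello from the kernel thread"))
```

Because `sys.stdout` is shared across threads, the restore in the `finally` block matters: without it, a crash in the worker would leave the process printing into a dead buffer.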
@minrk Indeed, this works for me. I have a local proof of concept. Happy to submit a patch if you think it's worth it.
Unfortunately, the LibTorch API and ABI are not backward compatible, so we always need to tweak our code before supporting a new version of LibTorch. This means that we can't really...
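As an illustration of what that tweaking looks like in practice, bindings that track a fast-moving C++ API often keep small version-gated shims. A hedged Python sketch, where the version cutoff and both function names are made up and stand in for real per-version LibTorch differences:

```python
def make_binding(libtorch_version: str):
    """Pick an implementation based on the bundled LibTorch version.

    Purely illustrative: the 2.1 cutoff and the two code paths are
    placeholders for real per-version API/ABI differences.
    """
    major, minor = (int(p) for p in libtorch_version.split(".")[:2])

    if (major, minor) >= (2, 1):
        def norm(x):
            # hypothetical newer API path
            return ("linalg_vector_norm", sum(v * v for v in x) ** 0.5)
    else:
        def norm(x):
            # hypothetical older API path
            return ("norm", sum(v * v for v in x) ** 0.5)
    return norm


old = make_binding("1.13.1")([3.0, 4.0])
new = make_binding("2.1.0")([3.0, 4.0])
```

Both paths compute the same value; only the entry point differs, which is exactly the kind of churn that forces a code tweak on every new LibTorch release.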
So you get a warning with?

```r
options(timeout = 600)
kind
```
Interesting, could you paste the output of `nvidia-smi` on that machine?