Santosh Gupta
Yes, as long as the data is saved in numpy format, the data gadget can open it, or you could transfer live data onto there. If you give me a colab...
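For context, "saved in numpy format" just means a `.npy` file written with `np.save`. A minimal sketch (the file path and array here are illustrative, not from the thread):

```python
import os
import tempfile

import numpy as np

# An example embedding matrix to be opened later by a tool that
# expects numpy-format files (such as SpeedTorch's data gadget).
embeddings = np.random.rand(1000, 128).astype(np.float32)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "embeddings.npy")
    np.save(path, embeddings)   # write in numpy binary (.npy) format
    loaded = np.load(path)      # read it back into memory
```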
> Thank you for replying! I can load files by using speedtorch now. But cupy doesn't support multi-threading, so I have to reduce the thread count from 8 to 1, after...
How many cores does that CPU have? I can't seem to look it up. I'm not too familiar with that model. But with a colab notebook perhaps I can tinker...
Hmm, I'm not sure what's going on in the code; it doesn't seem to be related to speedtorch
> Joining the question..
>
> The project seems to give really nice gains in terms of performance, and I'm trying to use it in an inference pipeline I've created....
Maybe you can open the data in the speedtorch data gadget, and use that to send it to the gpu. But opening the data in speedtorch may take a bit longer...
Cupy is somewhat messy when it comes to checking if it's already installed. So if it's already installed on your system, it might try to reinstall it. For this reason,...
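One way to sidestep the reinstall problem is to check importability before installing. This is a generic sketch, not SpeedTorch's own install logic; the wheel name `cupy-cuda101` is just an example and depends on your CUDA version:

```python
import importlib.util
import subprocess
import sys


def is_installed(package: str) -> bool:
    """Return True if `package` is importable in this environment."""
    return importlib.util.find_spec(package) is not None


def ensure_cupy(version_spec: str = "cupy-cuda101") -> None:
    # Only pip-install cupy when it isn't already importable, so an
    # existing installation is left untouched.
    if not is_installed("cupy"):
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", version_spec]
        )
```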
This makes sense. I initially didn't know why pinned cupy tensors were getting faster performance, but a pytorch engineer pointed it out; I updated the 'How it works?' section but...
> quad-core ARM A57 64-bit CPU

So this would have 4 cores?

> So it seems the best use case for SpeedTorch is CPU-GPU transfer of slices of big tensors?

Yes, but...
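The slice-transfer pattern referred to above can be sketched with numpy standing in for the actual device copy (names here are illustrative; in the real pipeline the small gathered batch, not the whole table, is what gets moved to the GPU):

```python
import numpy as np

# A large embedding table that stays on the (pinned) CPU side.
vocab_size, dim = 100_000, 64
cpu_embeddings = np.arange(
    vocab_size * dim, dtype=np.float32
).reshape(vocab_size, dim)


def gather_rows(table: np.ndarray, indices: np.ndarray,
                out: np.ndarray) -> np.ndarray:
    """Copy only the requested rows into a small preallocated buffer.

    Transferring this small slice, rather than the full table, is the
    case where pinned-memory CPU-GPU copies pay off.
    """
    np.take(table, indices, axis=0, out=out)
    return out


batch_idx = np.array([3, 42, 99_999])
batch = np.empty((len(batch_idx), dim), dtype=np.float32)
gather_rows(cpu_embeddings, batch_idx, batch)
# In the real pipeline, `batch` would now be copied to the GPU.
```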
Another approach you might want to consider is using the PyCuda and Numba indexing kernels, using a similar approach, disguising CPU pinned tensors as GPU tensors. I didn't have a...