
loading torch.CudaTensor model when you only have ClTensor / cltorch

Open · roberttwomey opened this issue 8 years ago • 2 comments

Hi Hugh,

Thank you for this great project; I've had very good luck running a number of torch-based projects on my AMD-equipped MacBook Pro. I've run into a roadblock loading pre-trained models that were trained using cutorch/cunn (for example, a pretrained checkpoint from neuraltalk2 or videoGAN).

Is there any simple way to read a CudaTensor model and save it as a ClTensor without a CUDA-capable graphics card and cutorch/cunn installed?

Thanks!

Robert

roberttwomey · Oct 03 '16 15:10

Theoretically it should be possible, since it's just a bunch of 1s and 0s in a file. In practice, the options are, I suppose:

  • use a CUDA card to load it, then save it out as a FloatTensor (for example), or
  • use a hex editor (or programmatic equivalent) to modify the file so it becomes a FloatTensor, then load it

Personally, I think:

  • if it's just one tensor, the first option is probably the easiest (mind you, setting up a CUDA EC2 box is non-trivial if you've never done it before, though I have instructions if you need them; ~10-15 minutes elapsed, about 1-2 minutes of actual work). A rough Lua sketch of this option follows after this list.
  • if you have tons of tensors, I would think the second option is probably not crazy insane. I suspect most of the file is just binary floats, 4 bytes per value, with a short string at the start that probably says something like torch.CudaTensor, which you'd have to change to read torch.FloatTensor :-P You might need to adjust some length fields too, though. You can see how it saves stuff at https://github.com/torch/torch7/blob/master/File.lua#L107
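For concreteness, here is a rough sketch of option 1, to run on a machine that does have CUDA and cutorch/cunn installed. The filenames, and the assumption that the checkpoint is either a single nn network or a flat table of networks/tensors, are just illustrative; adjust to whatever your checkpoint actually contains.

```lua
-- Option 1 sketch: load a CudaTensor checkpoint on a CUDA machine, convert
-- everything to FloatTensor, and re-save it so it loads without cutorch/cunn.
require 'torch'
require 'nn'
require 'cutorch'
require 'cunn'

local checkpoint = torch.load('checkpoint_gpu.t7')   -- CudaTensor-based file

if torch.isTypeOf(checkpoint, 'nn.Module') then
  -- the whole checkpoint is a single network
  checkpoint:float()
else
  -- assume a flat table of networks and/or tensors; a deeply nested
  -- checkpoint would need a recursive walk instead
  for k, v in pairs(checkpoint) do
    if torch.isTypeOf(v, 'nn.Module') or torch.isTensor(v) then
      checkpoint[k] = v:float()
    end
  end
end

torch.save('checkpoint_cpu.t7', checkpoint)           -- loadable without CUDA
```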

hughperkins · Oct 06 '16 07:10

Looks like this is highly relevant: https://github.com/karpathy/neuraltalk2/blob/master/convert_checkpoint_gpu_to_cpu.lua
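Once that script (or something like the sketch above) has produced a CPU/FloatTensor checkpoint, moving it to the OpenCL side should just be a matter of loading it and converting with cltorch/clnn. A hedged sketch, with the filename assumed and assuming the checkpoint is a bare network rather than a wrapping table:

```lua
-- Load a converted CPU (FloatTensor) checkpoint and move it onto the OpenCL
-- device. :cl() from clnn plays the role that :cuda() plays with cunn.
require 'torch'
require 'nn'
require 'cltorch'
require 'clnn'

local model = torch.load('checkpoint_cpu.t7')   -- FloatTensor-based file
model:cl()                                      -- parameters become ClTensors

-- optional sanity check (input shape purely illustrative):
-- local out = model:forward(torch.ClTensor(1, 3, 224, 224):zero())
```

If the checkpoint is a table rather than a bare network, dig the individual networks out of it and call :cl() on each one.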

ghost · Oct 20 '16 19:10