
Passing data from cuda array

solarflarefx opened this issue 5 years ago · 9 comments

I know this is a project in development, but I wanted to know whether it is currently possible to pass data from a CUDA array to a model loaded on the GPU, without having to do any copies to/from the CPU.

solarflarefx · Dec 16 '19 17:12

If that is possible in Python, I see no reason why it shouldn't work in Torch.NET.

henon · Dec 16 '19 18:12

So this would involve converting a CUDA array into a PyTorch tensor that resides on the GPU. I believe this can be done in Python, but I'm not 100% sure. Also, from the main page it looks like torch.cuda has not been wrapped yet, though the description above does say tensors can reside on the GPU with this wrapper.

Have you tried converting a cuda array to a pytorch gpu equivalent?

solarflarefx · Dec 22 '19 22:12

No, I haven't. But since Pythonnet calls directly into Python, everything that can be done in Python is possible with Pythonnet too.
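For example, on the Python side a zero-copy hand-off could look roughly like the sketch below (assuming the CUDA array is held by CuPy or some other library that speaks DLPack / __cuda_array_interface__; this is illustrative, not something Torch.NET exposes yet):

    import cupy as cp
    import torch
    from torch.utils import dlpack

    # An array that already lives in GPU memory (created here with CuPy).
    gpu_array = cp.arange(6, dtype=cp.float32).reshape(2, 3)

    # Zero-copy hand-off via DLPack: the resulting tensor shares the same
    # device memory, so nothing is staged through the CPU.
    gpu_tensor = dlpack.from_dlpack(gpu_array.toDlpack())
    print(gpu_tensor.is_cuda)   # True
    print(gpu_tensor.device)    # cuda:0

    # torch.as_tensor can also consume objects that implement
    # __cuda_array_interface__ (e.g. CuPy or Numba device arrays).
    gpu_tensor2 = torch.as_tensor(gpu_array, device='cuda')

Driving the same calls from C# through Pythonnet should mostly be a matter of wrapping them.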

Sadly I wasn't able to finish Torch.NET, it is currently in a half finished state. The problem is, I have too much workload at the moment to continue working on it.

henon · Dec 23 '19 09:12

Sorry, this is a point of confusion on my part: on the main page it says that torch.Tensor has been wrapped but torch.cuda has not. However, at the top it says that tensors can be created on the GPU and GPU operations can be performed on them.

I looked up a little bit more on tensors and more specifically GPU tensors: https://stackoverflow.com/questions/53628940/differences-between-torch-tensor-and-torch-cuda-tensor

In the link I referenced, it says: "As you can see here that a tensor which is moved to GPU is actually a tensor of type: torch.cuda.*Tensor i.e. torch.cuda.FloatTensor.

So cpu_tensor.to(device) or torch.Tensor([1., 2.], device='cuda') will actually return a tensor of type torch.cuda.FloatTensor."

So does this mean that the tensor creation and allocation parts of torch.cuda have been wrapped but certain elements have not? Is there an example of this?

Ultimately I want to take a model I trained in Python and load it into .NET for inference, with the model loaded onto the GPU and tensors allocated on the GPU. Is this possible with Torch.NET?

If this is possible, what format should I export my model in from Python? It looks like torch.onnx has not yet been wrapped, so I should probably avoid ONNX. Is there a particular saved-model format that would work? Would it be advisable to rebuild the whole model in Torch.NET?
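For context, the standard Python-side export options look roughly like the sketch below (a hypothetical toy model, for illustration only; whether the corresponding load functions are wrapped in Torch.NET is exactly what I am asking about):

    import torch
    import torch.nn as nn

    # A stand-in for the trained model (hypothetical architecture).
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

    # Option 1: save only the weights; loading later requires rebuilding the architecture.
    torch.save(model.state_dict(), "model_weights.pt")

    # Option 2: save a TorchScript trace, which bundles architecture and weights
    # and can be reloaded without the original Python class definition.
    example_input = torch.randn(1, 4)
    torch.jit.trace(model, example_input).save("model_traced.pt")

    # Later, wherever torch is available, load onto the GPU for inference.
    loaded = torch.jit.load("model_traced.pt").to("cuda").eval()
    with torch.no_grad():
        output = loaded(torch.randn(1, 4, device="cuda"))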

solarflarefx · Jan 02 '20 23:01

Ok, GPU Tensors are definitely available at this point.

As for loading vs. rebuilding models, I would say it is best not to rebuild the model in Torch.NET but just load it. Please investigate what you need for that, and if it is only a couple of functions I'll try to generate them if they are not yet available, which I guess they aren't.

It would be best if you checked out Torch.NET, looked at the unit tests, and provided tests (which of course won't compile, or will fail) so that I know exactly what is needed and can work most efficiently. What do you think?

henon · Jan 03 '20 12:01

Sounds good, I will experiment with Torch.NET and get back to you. It may take me a little time, as I have been experimenting with both TensorFlow and PyTorch, and the model I am currently interested in testing was last trained in TensorFlow, so I will first have to load it into PyTorch, save the model, and then use it in Torch.NET. Yes, I know, a bit convoluted, but I think this is the only way to work with Torch.NET (unless you have further insight).

solarflarefx · Jan 03 '20 14:01

So after doing some research, I think the libtorch (C++) function I am looking for is the following: https://pytorch.org/cppdocs/api/function_namespacetorch_1ad7fb2a7759ef8c9443b489ddde494787.html

torch::from_blob

I think this is ultimately what I am looking for: a way to take data already on the GPU and use it as a tensor input to a DL model.

Is this currently implemented in Torch.NET?

solarflarefx · Jan 27 '20 01:01

No, not yet. Here is an example of a typical unit test in Torch.NET:

        [TestMethod]
        public void new_tensorTest()
        {
            // >>> tensor = torch.ones((2,), dtype=torch.int8)
            // >>> data = [[0, 1], [2, 3]]
            // >>> tensor.new_tensor(data)
            // tensor([[ 0,  1],
            //         [ 2,  3]], dtype=torch.int8)
            // 

            var tensor = torch.ones(new Shape(2), dtype: torch.int8);
            var data = new int[,] { { 0, 1 }, { 2, 3 } };
            var given = tensor.new_tensor(data);
            var expected =
                "tensor([[0, 1],\n" +
                "        [2, 3]], dtype=torch.int8)";
            Assert.AreEqual(expected, given.repr);
        }

As you can see, at the top there are Python commands and their output from the interactive Python console. If you could set up a simple test case like this (even if you can only provide the console commands), it would help a lot in getting the missing functionality done and working.
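For the GPU-interop case discussed above, the console commands for such a test might look something like this (using CuPy as a stand-in source for the CUDA array; a hypothetical sketch, not yet a working Torch.NET test):

    >>> import cupy as cp
    >>> import torch
    >>> from torch.utils import dlpack
    >>> arr = cp.ones((2, 3), dtype=cp.float32)   # data already resident on the GPU
    >>> t = dlpack.from_dlpack(arr.toDlpack())    # wrap it as a torch tensor, no copy
    >>> t.is_cuda
    True
    >>> t.device
    device(type='cuda', index=0)
    >>> t[0, 0] = 5.0
    >>> float(arr[0, 0])                          # change is visible from CuPy: memory is shared
    5.0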

henon · Jan 27 '20 11:01

It's possible in libtorch. See https://stackoverflow.com/questions/77390607/how-to-convert-a-cudaarray-to-a-torch-tensor

GF-Huang · Feb 19 '24 01:02