Question: What is the best practice for working with PyTorch?
Hi PyJulia team,
I'm just getting started with porting a Julia package to a Python module. On the Python side, all the interaction will be with PyTorch tensors.
I noticed that one can pass numpy arrays and lists to Julia, which are then converted to Julia's built-in Array type. AFAIK, this is not possible with PyTorch tensors. A simple workaround would be to convert the tensors from torch to numpy (and back). However, I am curious whether there is a more elegant way. How can I control this mapping?
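For reference, the numpy workaround is cheaper than it may sound: for CPU tensors, `torch.Tensor.numpy()` and `torch.from_numpy()` share memory rather than copying, so the ndarray handed to Julia aliases the original tensor. A minimal sketch (PyJulia itself is omitted here; this only shows the torch ↔ numpy bridge):

```python
import torch
import numpy as np

# .numpy() returns a zero-copy view of a CPU tensor (it raises for CUDA
# tensors or tensors that require grad), so the resulting ndarray can be
# passed to Julia via PyJulia and mutations remain visible to the tensor.
t = torch.arange(6, dtype=torch.float64).reshape(2, 3)
a = t.numpy()                    # zero-copy view, no data is moved

a[0, 0] = 42.0                   # mutate through the numpy view ...
assert t[0, 0].item() == 42.0    # ... the tensor sees the change

# The reverse direction, torch.from_numpy, also shares memory.
b = np.ones((2, 2))
u = torch.from_numpy(b)
b[0, 0] = 7.0
assert u[0, 0].item() == 7.0
```

So "convert to numpy" costs only a thin wrapper object, not a copy of the data, as long as the tensor lives on the CPU.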
Best regards, Fabio
I believe any Python array that implements the buffer protocol should be wrappable by Julia without copying. From the Julia perspective, we just need the dimensions, the element type (dtype), and the data pointer. We could then use Base.unsafe_wrap or an equivalent.
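As an illustration of what that would require on the Python side, numpy already exposes exactly those three ingredients (this is a sketch of the idea, not PyJulia's actual mechanism):

```python
import numpy as np

a = np.arange(12, dtype=np.float64).reshape(3, 4)

# The three pieces of information a Base.unsafe_wrap-style call would need:
ptr = a.ctypes.data   # raw data pointer, as a plain integer
dims = a.shape        # dimensions
dtype = a.dtype       # element type, mapping to a Julia bits type

# On the Julia side (not executed here) one could then do, roughly:
#   unsafe_wrap(Array, Ptr{Float64}(ptr), reverse(dims))
# Note that numpy defaults to row-major layout while Julia Arrays are
# column-major, hence the reversed dims (yielding a transposed view), and
# the wrapped array is only valid while `a` keeps the memory alive.
print(ptr, dims, dtype)
```

The same information is available for any buffer-protocol object via `memoryview` (`.shape`, `.format`, and the underlying buffer pointer at the C level), which is presumably what a generic no-copy wrapper would consume.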
https://docs.julialang.org/en/v1/base/c/#Base.unsafe_wrap-Union{Tuple{N},%20Tuple{T},%20Tuple{Union{Type{Array},%20Type{Array{T}},%20Type{Array{T,%20N}}},%20Ptr{T},%20Tuple{Vararg{Int64,%20N}}}}%20where%20{T,%20N}
It is unclear to me what the status of buffer-protocol support is in PyTorch. The following issue appears relevant: https://github.com/pytorch/pytorch/issues/19143
Thank you @mkitti for the hints and explanation. I'll have a look at them.