ATen
Method to allocate CUDA tensors on a specific device
PyTorch's CUDA tensor constructors accept an undocumented keyword argument, device,
which lets you specify which GPU the tensor should be allocated on. Looking at Type
in ATen (the documented way to allocate tensors), there does not appear to be any way to specify the device when allocating new tensors. There should be!
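For reference, this is roughly what allocation through a Type looks like; the names follow the ATen README of this era, so treat it as a sketch rather than the exact API (it also needs an ATen build with CUDA to compile). Note there is no parameter anywhere for a device index:

```cpp
#include <ATen/ATen.h>

int main() {
  // Documented path: obtain a Type (backend + scalar type), then
  // allocate through it. Names per the ATen README; sketch only.
  at::Type &cuda_float = at::CUDA(at::kFloat);
  at::Tensor t = cuda_float.ones({3, 4});  // lands on the *current* CUDA device
  // No overload takes a device index, so putting t on, say, GPU 1 means
  // changing the current device globally before making the call.
  return 0;
}
```

The only workaround today is to switch the process-wide current device before allocating, which is exactly the kind of global state a device argument would avoid.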
@zdevito I think it would be useful to have this feature available. We stumbled upon one use case today while using ATen outside PyTorch.