Extend Tensor Dimension support from 8 to 32
🚀 The feature, motivation and pitch
PyTorch currently supports tensors with at most 25 dimensions.
Comments from a Slack thread:
J Jiang (42 minutes ago): Because our tensor rank is instantiated via a template constant argument, there's a physical limit to it.
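A minimal sketch of the constraint Jie describes, with hypothetical names (TensorArg and copy_kernel are illustrative, not nvFuser's actual types): because the rank is a template parameter, the size/stride arrays are fixed at compile time and only pre-instantiated ranks can be dispatched.

```cuda
#include <cstdint>

// Hypothetical tensor argument: the rank is a compile-time template
// parameter, so the metadata arrays have a fixed, compiled-in length.
template <typename T, int kRank>
struct TensorArg {
  T* data;
  int64_t size[kRank];    // one entry per dimension
  int64_t stride[kRank];  // one entry per dimension
};

// One kernel instantiation per rank; any rank above the largest
// instantiated value simply has no kernel to dispatch to.
template <typename T, int kRank>
__global__ void copy_kernel(TensorArg<T, kRank> out, TensorArg<T, kRank> in) {
  // ... per-dimension index math over kRank dimensions ...
}

// Instantiated ranks are fixed at compile time, e.g. a rank-8 launch:
//   copy_kernel<float, 8><<<grid, block>>>(out8, in8);
```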
Naoya Maruyama (40 minutes ago): There should be no fundamental limit. As Jie mentioned, higher-dimensional tensors would require more space, but that should be it.
J Jiang (39 minutes ago): Yeah, we can support higher-rank tensors, but there would still be a limit set at compile time 🙂
Naoya Maruyama (31 minutes ago): Yeah, higher ranks would mean larger size and stride vectors, so the parameter size of kernel launches would increase in our current implementation, which could mean launch failures. We could pass that data through constant memory instead.
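A hedged sketch of the constant-memory workaround Naoya mentions (kMaxRank, kMaxArgs, and the symbol names are hypothetical, not nvFuser's actual scheme): the size/stride metadata is staged in __constant__ memory before the launch, so the kernel-parameter payload stays small at any rank.

```cuda
#include <cuda_runtime.h>
#include <cstdint>

constexpr int kMaxRank = 32;  // hypothetical rank ceiling
constexpr int kMaxArgs = 16;  // hypothetical per-kernel argument budget

// Per-argument shape metadata lives in constant memory rather than in
// the kernel-parameter space.
__constant__ int64_t c_sizes[kMaxArgs][kMaxRank];
__constant__ int64_t c_strides[kMaxArgs][kMaxRank];

__global__ void elementwise_kernel(float* out, const float* in) {
  // Index math reads shapes/strides from constant memory, e.g.:
  //   int64_t s0 = c_strides[0][0];
  // ...
}

void stage_metadata(const int64_t* sizes, const int64_t* strides) {
  // Host side: copy the metadata for argument 0 up before the launch.
  cudaMemcpyToSymbol(c_sizes, sizes, kMaxRank * sizeof(int64_t));
  cudaMemcpyToSymbol(c_strides, strides, kMaxRank * sizeof(int64_t));
  // elementwise_kernel<<<grid, block>>>(out, in);
}
```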
J Jiang (29 minutes ago): I vaguely remember that we break a fusion into smaller kernels when looking at argument sizes... I don't remember if that's done at the TorchScript level or in the FusionSegmenter, but it should be a workable problem.
Christian Sarofeen (25 minutes ago): All of this is workable/reworkable. Will leave it up to @Mike Ruberry to let us know how important/urgent he thinks this is.
Mike Ruberry (24 minutes ago): Not an urgent concern.
Naoya Maruyama (24 minutes ago): Related issue: csarofeen/pytorch#1993, "codegen error: parameter space overflow 4096 bytes allowed" (https://github.com/csarofeen/pytorch/issues/1993).
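For scale: with per-dimension metadata stored as 8-byte integers, a single rank-32 tensor argument carries 32 sizes plus 32 strides, i.e. 2 × 32 × 8 = 512 bytes before its data pointer, so a fused kernel taking just eight such tensors would already exceed the 4096-byte parameter space from the issue above.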
Christian Sarofeen (24 minutes ago): Adding one or two more dimensions seems pretty easy; getting to the full size seems a bit trickier.
Christian Sarofeen (23 minutes ago): If we do decide to tackle this, a good target would be to stop passing the tensor parameters directly and instead decompose them into what's actually used, since many stride/size parameters probably aren't.
Christian Sarofeen (22 minutes ago): The initial design was primarily around convenience.
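A hedged sketch of the decomposition Christian suggests above (hypothetical codegen output, not nvFuser's actual generated code): instead of passing whole per-tensor structs, the generated kernel takes only the scalars its index math actually reads.

```cuda
#include <cstdint>

// Before (hypothetical): the full metadata travels with every argument,
//   kernel(TensorArg<float, 32> out, TensorArg<float, 32> in)  // ~1 KiB
// even when the generated body reads only a few entries.
//
// After: codegen emits exactly the scalars the kernel body uses.
__global__ void generated_kernel(float* out, const float* in,
                                 int64_t size0,
                                 int64_t in_stride0,
                                 int64_t in_stride2) {
  // ... index math referencing only these three scalars ...
}
```

Under this scheme the parameter footprint scales with how many sizes/strides the fused expression actually touches, not with the declared rank of its inputs.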