Tensor subclass boilerplate can be consolidated
Much of the code in our tensor subclasses is duplicated across classes and could likely be consolidated into a base class (or mixin) that the individual subclasses inherit from. Examples of duplicated methods:
- `_get_to_kwargs`: https://github.com/pytorch/ao/blob/main/torchao/dtypes/affine_quantized_tensor.py#L64, https://github.com/pytorch/ao/blob/main/torchao/dtypes/affine_quantized_tensor.py#L276 (also needed for https://github.com/pytorch/ao/blob/main/torchao/quantization/autoquant.py#L40)
- `to`: https://github.com/pytorch/ao/blob/main/torchao/dtypes/affine_quantized_tensor.py#L594, https://github.com/pytorch/ao/blob/main/torchao/dtypes/affine_quantized_tensor.py#L290, https://github.com/pytorch/ao/blob/main/torchao/dtypes/affine_quantized_tensor.py#L423 (also needed for https://github.com/pytorch/ao/blob/main/torchao/quantization/autoquant.py#L40)
- `_apply_fn_to_data`: https://github.com/pytorch/ao/blob/main/torchao/dtypes/affine_quantized_tensor.py#L432
- `detach`: https://github.com/pytorch/ao/blob/main/torchao/dtypes/affine_quantized_tensor.py#L444
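For concreteness, here is a sketch (with hypothetical names, not the actual torchao code) of the per-class pattern the links above point at: each subclass hand-writes its own `_apply_fn_to_data` and its own `detach` handling in `__torch_dispatch__`.

```python
import torch

# Hypothetical illustration of the boilerplate each subclass currently
# repeats by hand; ExampleQuantTensor is not a real torchao class.
class ExampleQuantTensor(torch.Tensor):
    @staticmethod
    def __new__(cls, int_data, scale):
        return torch.Tensor._make_wrapper_subclass(
            cls, int_data.shape, dtype=scale.dtype
        )

    def __init__(self, int_data, scale):
        self.int_data = int_data
        self.scale = scale

    def _apply_fn_to_data(self, fn):
        # Re-implemented per subclass: apply fn to every inner tensor
        # and rebuild the wrapper.
        return self.__class__(fn(self.int_data), fn(self.scale))

    @classmethod
    def __torch_dispatch__(cls, func, types, args, kwargs):
        # Re-implemented per subclass: route detach through _apply_fn_to_data.
        if func is torch.ops.aten.detach.default:
            return args[0]._apply_fn_to_data(torch.Tensor.detach)
        raise NotImplementedError(f"{cls.__name__} does not support {func}")
```

Every new subclass copies these two methods nearly verbatim, which is the duplication this issue proposes to remove.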
A default `__repr__` would also be nice; its absence has caused bugs in the past.
If these methods were implemented in terms of `__tensor_flatten__` and `__tensor_unflatten__` (e.g. https://github.com/pytorch/ao/blob/main/torchao/dtypes/affine_quantized_tensor.py#L412-L421), a general mixin/parent class could handle the generic form of all of them.
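As a sketch of what that mixin could look like (all names here are hypothetical; it assumes the subclass implements `__tensor_flatten__`/`__tensor_unflatten__` as in the linked code, and uses `torch._C._nn._parse_to` the same way the existing `_get_to_kwargs` implementations do):

```python
import torch

class TensorSubclassMixin:
    # Hypothetical base class: relies only on the subclass providing
    # __tensor_flatten__/__tensor_unflatten__, so each method below can
    # be written once in its generic form.

    def _apply_fn_to_data(self, fn):
        # Apply fn to each inner tensor, then rebuild the subclass.
        tensor_names, ctx = self.__tensor_flatten__()
        inner = {name: fn(getattr(self, name)) for name in tensor_names}
        return self.__class__.__tensor_unflatten__(inner, ctx, None, None)

    def _get_to_kwargs(self, *args, **kwargs):
        # Normalize .to(...) arguments, defaulting to current metadata.
        device, dtype, _, memory_format = torch._C._nn._parse_to(*args, **kwargs)
        return {
            "device": self.device if device is None else device,
            "dtype": self.dtype if dtype is None else dtype,
            "memory_format": memory_format or torch.preserve_format,
        }

    def __repr__(self):
        # Generic default repr, so subclasses cannot forget to define one.
        tensor_names, ctx = self.__tensor_flatten__()
        inner = ", ".join(f"{n}={getattr(self, n)!r}" for n in tensor_names)
        return f"{self.__class__.__name__}({inner}, ctx={ctx})"

    @classmethod
    def __torch_dispatch__(cls, func, types, args, kwargs):
        # Ops that only touch data (like detach) handled generically.
        if func is torch.ops.aten.detach.default:
            return args[0]._apply_fn_to_data(torch.Tensor.detach)
        raise NotImplementedError(f"{cls.__name__} does not support {func}")

# A minimal subclass then only needs flatten/unflatten plus construction:
class MyQuantTensor(TensorSubclassMixin, torch.Tensor):
    @staticmethod
    def __new__(cls, int_data, scale):
        return torch.Tensor._make_wrapper_subclass(
            cls, int_data.shape, dtype=scale.dtype
        )

    def __init__(self, int_data, scale):
        self.int_data = int_data
        self.scale = scale

    def __tensor_flatten__(self):
        return ["int_data", "scale"], None

    @classmethod
    def __tensor_unflatten__(cls, tensors, ctx, outer_size, outer_stride):
        return cls(tensors["int_data"], tensors["scale"])
```

This is only a sketch under the stated assumptions; the real consolidation would also need to cover `to` itself (dispatching through `_get_to_kwargs`) and the dictionary-based dispatch torchao subclasses use for other aten ops.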