Refactor PyTorch 64-Bit Data Type Conversion
🌱 Describe your Feature Request
When we look at how the coremltools PyTorch converter deals with PyTorch data types in utils.py:
```python
NUM_TO_TORCH_DTYPE = {
    ...
    2: torch.int16,
    3: torch.int32,
    4: torch.int32,
    5: torch.float16,
    6: torch.float32,
    7: torch.float32,
    ...
}
```
It is super confusing to see both 3 and 4 map to int32 and both 6 and 7 map to float32. It took me a while to figure out that the reason is probably that Core ML does not support 64-bit data types.
If Core ML won't support 64-bit data types any time soon, we should probably refactor this confusing workaround: keep the mapping faithful, and replace a 64-bit dtype with its 32-bit counterpart every time one occurs in an op translation.
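For illustration, here is a minimal sketch of what such a refactor could look like. The faithful table and the `downcast_64_bit_dtype` helper below are assumptions for this sketch, not the actual coremltools implementation:

```python
import torch

# Faithful mapping: each PyTorch ScalarType code maps to the dtype it
# actually denotes (4 is int64 and 7 is float64 in PyTorch's enum).
NUM_TO_TORCH_DTYPE = {
    2: torch.int16,
    3: torch.int32,
    4: torch.int64,
    5: torch.float16,
    6: torch.float32,
    7: torch.float64,
}

# Core ML has no 64-bit tensor types, so narrow them in exactly one place.
_TORCH_64_BIT_TO_32_BIT = {
    torch.int64: torch.int32,
    torch.float64: torch.float32,
}

def downcast_64_bit_dtype(dtype: torch.dtype) -> torch.dtype:
    """Hypothetical helper: replace a 64-bit dtype with its 32-bit counterpart."""
    return _TORCH_64_BIT_TO_32_BIT.get(dtype, dtype)
```

Each op translation that needs a dtype would then call something like `downcast_64_bit_dtype(NUM_TO_TORCH_DTYPE[num])` explicitly, so the narrowing is visible at the call site instead of being hidden inside the lookup table.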
How can this feature be used?
It would help contributors understand the PyTorch converter, and make the converter less prone to bugs caused by misunderstanding this hacky workaround.
Describe alternatives you've considered
If Core ML ever supports 64-bit data types, then simply updating the entries to `4: torch.int64` and `7: torch.float64` will do.
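Building on the faithful mapping sketched above, that alternative would amount to (a sketch, not a patch against the real file):

```python
# Once Core ML supports 64-bit types, the two entries become faithful again
# and the downcast helper can be deleted.
NUM_TO_TORCH_DTYPE.update({
    4: torch.int64,
    7: torch.float64,
})
```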
Made changes in the following pull request (https://github.com/apple/coremltools/pull/2171). https://pastes.dev/4BCMTrVIqa
https://github.com/apple/coremltools/pull/2171#issuecomment-2021590537
This issue should be closed: the related pull request has been closed (https://github.com/apple/coremltools/pull/2171#issuecomment-2085785869). @YifanShenSZ