BentoML
bug: numpy to torch.Tensor conversion does not preserve dtype when using np.float16
Describe the bug
I recently ran into dtype mismatch errors when calling model.run() with numpy.float16 input while the PyTorch model's dtype is torch.float16. After some inspection, I found that the problem is caused by BentoML's type conversion here:
On line 97, it uses torch.Tensor() to perform the numpy-to-torch conversion, which does not preserve the floating-point dtype: torch.Tensor() is the legacy FloatTensor constructor and always produces the default dtype (torch.float32), whereas torch.tensor() infers the dtype from its input. I think it should use torch.tensor() instead of torch.Tensor(). See the difference here:
Replacing it with torch.tensor() solved the problem in my testing, though it does require the user to take care to pass the correct numpy dtype.
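A minimal standalone reproduction of the dtype difference (independent of BentoML, using only numpy and torch):

```python
import numpy as np
import torch

arr = np.zeros(3, dtype=np.float16)

# torch.Tensor() is the legacy FloatTensor constructor: it copies the data
# but always yields the default dtype, torch.float32, dropping float16.
assert torch.Tensor(arr).dtype == torch.float32

# torch.tensor() infers the dtype from the input array, preserving float16.
assert torch.tensor(arr).dtype == torch.float16

# torch.from_numpy() also preserves the dtype, and shares memory with the
# source array instead of copying.
assert torch.from_numpy(arr).dtype == torch.float16
```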
To reproduce
No response
Expected behavior
No response
Environment
bentoml: 1.1.8
python: 3.10.12
Great find. Can you submit a PR to address this? Thanks!