
bug: numpy to torch.Tensor conversion does not preserve dtype when using np.float16

Open igamenovoer opened this issue 1 year ago • 1 comments

Describe the bug

I recently ran into dtype mismatch errors when calling model.run() with numpy.float16 input on a PyTorch model whose dtype is torch.float16. After inspecting the code, I found that the problem is caused by BentoML's type conversion here:

[screenshot: BentoML's numpy-to-torch conversion code]

On line 97, it uses torch.Tensor() to perform the numpy-to-torch conversion, which does not preserve the floating-point dtype: the legacy torch.Tensor() constructor always produces a tensor of the default dtype (torch.float32). I think it should use torch.tensor() instead, which infers the dtype from the input array.

Replacing it with torch.tensor() solved the problem in my testing, though it does require the user to take care to use the correct numpy dtype on the input side.
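The difference between the two constructors can be demonstrated in isolation (a minimal sketch of the behavior described above, independent of BentoML):

```python
import numpy as np
import torch

# A float16 numpy array, as would be passed to model.run()
x = np.array([1.0, 2.0, 3.0], dtype=np.float16)

# Legacy constructor: coerces to the default dtype, dropping float16
a = torch.Tensor(x)
print(a.dtype)  # torch.float32

# Factory function: infers the dtype from the numpy array
b = torch.tensor(x)
print(b.dtype)  # torch.float16
```

torch.from_numpy() would also preserve the dtype (and additionally shares memory with the source array), so it may be another option depending on whether a copy is desired.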

To reproduce

No response

Expected behavior

No response

Environment

bentoml: 1.1.8
python: 3.10.12

igamenovoer avatar Nov 07 '23 10:11 igamenovoer

Great find. Can you submit a PR to address this? Thanks!

aarnphm avatar Nov 12 '23 19:11 aarnphm