Samyak Jain

I'm facing the same issue with `torch_dtype=torch.float16`.

> I found a solution: remove `torch_dtype`, and it should work fine!

If `torch_dtype=torch.float16` is removed, the model weights take double the memory, since they are loaded in float32 by default.
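A back-of-envelope sketch of why dropping the half-precision dtype doubles weight memory: float32 stores 4 bytes per parameter while float16 stores 2. The 7B parameter count below is an assumed example size, not from the original thread.

```python
# Bytes per parameter for the two dtypes in question.
BYTES_PER_PARAM = {"float32": 4, "float16": 2}

def weight_memory_gib(num_params: int, dtype: str) -> float:
    """Return the raw weight storage in GiB for the given dtype."""
    return num_params * BYTES_PER_PARAM[dtype] / 1024**3

params = 7_000_000_000  # hypothetical 7B-parameter model
fp32 = weight_memory_gib(params, "float32")  # default when torch_dtype is omitted
fp16 = weight_memory_gib(params, "float16")  # with torch_dtype=torch.float16
print(f"float32: {fp32:.1f} GiB")
print(f"float16: {fp16:.1f} GiB")
```

This counts only the weights themselves; activations, optimizer state, and KV caches add more on top, but the 2x ratio between the two dtypes holds regardless of model size.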