Error when using FP16 or Mixed precision
When I train the model in fp16 or mixed precision on a V100 SXM2 16GB, loss.backward() returns an error:
python: /project/lib/Analysis/Allocation.cpp:40: std::pair<llvm::SmallVector
Triton doesn't support V100 very well.
Thanks for the reply. Is there any solution, or an alternative to Triton?
There's a reference implementation in PyTorch, but it would probably be quite a bit slower.
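For reference, a plain-PyTorch attention fallback looks roughly like the sketch below. This is not the repository's actual reference implementation, just an illustration of the idea: it materializes the full attention matrix instead of using a fused Triton kernel, so it needs more memory and runs slower, but it has no GPU-architecture requirements.

```python
import torch
import torch.nn.functional as F

def reference_attention(q, k, v, causal=False):
    """Illustrative plain-PyTorch scaled dot-product attention.

    q, k, v: (batch, heads, seq_len, head_dim) tensors.
    Materializes the full (seq_len, seq_len) score matrix, so it is
    slower and more memory-hungry than a fused kernel.
    """
    scale = q.shape[-1] ** -0.5
    scores = torch.matmul(q, k.transpose(-2, -1)) * scale
    if causal:
        seq_len = q.shape[-2]
        mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=q.device),
            diagonal=1,
        )
        scores = scores.masked_fill(mask, float("-inf"))
    attn = F.softmax(scores, dim=-1)
    return torch.matmul(attn, v)
```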
Hi @tridao, just to confirm and for documentation: we hit the same error on an older laptop GPU, and switching back to full precision resolved the issue for us.
Cheers, Sascha
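For anyone landing here, a minimal sketch of the full-precision workaround is shown below. The model and data are placeholders; the point is simply to keep weights in fp32 and avoid torch.autocast / GradScaler, so no fp16 kernels are requested during the backward pass.

```python
import torch
import torch.nn as nn

# Placeholder model just to illustrate the workaround on an older GPU:
# keep everything in fp32 and do not wrap the step in torch.autocast.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(128, 128).to(device).float()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

x = torch.randn(4, 128, device=device)       # dummy input batch
target = torch.randn(4, 128, device=device)  # dummy targets

# Forward and backward run fully in fp32, sidestepping the fp16 path
# that triggers the Triton compilation error.
out = model(x)
loss = nn.functional.mse_loss(out, target)
loss.backward()
optimizer.step()
```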