[BUG] Error in inference.py when multiple GPUs are available
My server has 8 GPUs. When I run

```
python inference.py
```

all the models load successfully, but as soon as I submit an image and a question, it fails with:

```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm)
```
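From the message, it looks like some weights end up on cuda:1 while the activations stay on cuda:0, possibly because the checkpoint gets sharded across all visible GPUs at load time (I haven't dug into how inference.py places the modules, so this is a guess). The error itself is easy to reproduce in isolation:

```python
import torch

# LayerNorm weights placed on one device...
ln = torch.nn.LayerNorm(16).to("cuda:1")
# ...fed input activations that live on another.
x = torch.randn(2, 16, device="cuda:0")

# Raises the same RuntimeError: Expected all tensors to be on the same
# device, but found at least two devices, cuda:1 and cuda:0!
y = ln(x)
```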
But when I restrict it to a single GPU with

```
CUDA_VISIBLE_DEVICES=0 python inference.py
```

everything works fine.
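For anyone else hitting this, setting the variable inside the script should be equivalent to the command-line workaround, as long as it runs before anything initializes CUDA (a sketch, assuming nothing imports torch earlier in the script):

```python
import os

# Hide all but one GPU; must happen before torch initializes CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch  # noqa: E402  (deliberately imported after the env var is set)

# ...the rest of inference.py (model loading, generation) now sees a single
# device, so all weights and activations land on cuda:0.
```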
Does this script only work with a single GPU?