fblagojevic


The Llama example works fine when run with 2 GPUs: torchrun --nproc-per-node 2 pippy_llama.py. Output: ['know', 'think', 'you', 'be', 'getting', 'great', 'favorite', 'right']. However, the example hangs when run with 4 or...