yokie121

3 comments by yokie121

> AssertionError: Loading a checkpoint for MP=0 but world size is 1
> ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 1769) of binary: /usr/bin/python3

I have the same problem.

It's great! We would like to know how to run the larger models (13B or 65B). Thanks!

> You can run vanilla-llama on 1, 2, 4, 8 or 100 GPUs.

I would like to know how to do this. Thanks!