
Run Code Llama on Mac?

Open mauermbq opened this issue 1 year ago • 18 comments

Hi,

on mac I got the following error:

```
raise RuntimeError("Distributed package doesn't have NCCL " "built in")
RuntimeError: Distributed package doesn't have NCCL built in
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 80731) of binary: /opt/dev/miniconda3/envs/llama/bin/python3.10
```

I guess this is because CUDA is missing. Is there an option to run it on the CPU?
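For reference, PyTorch's NCCL backend only exists in CUDA builds, so on macOS the distributed backend has to fall back to Gloo. A minimal sketch of that selection (the helper name is ours; the `init_process_group` call in the comment is the standard `torch.distributed` API):

```python
def pick_backend(cuda_available: bool) -> str:
    """NCCL is only compiled into CUDA builds of PyTorch; everywhere
    else (including macOS) Gloo is the CPU fallback backend."""
    return "nccl" if cuda_available else "gloo"

# Typical use with torch.distributed (sketch, not run here):
#   import torch, torch.distributed as dist
#   dist.init_process_group(pick_backend(torch.cuda.is_available()))
```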

mauermbq avatar Aug 24 '23 21:08 mauermbq

+1 just went down this rabbit hole for a bit -- closest thing I found to helping here: https://github.com/facebookresearch/llama/commit/9a5670bef6d5a57faa3c408935c5fded020b94eb

lostmygithubaccount avatar Aug 24 '23 23:08 lostmygithubaccount

I've sent a PR for running CodeLlama on mac: https://github.com/facebookresearch/codellama/pull/18

davideuler avatar Aug 25 '23 04:08 davideuler

David, does this work on M2 MacBooks? If so, I'll patch it.

EDIT: I just applied that PR patch, since mine is an M2 - I went with lostmygithubaccount's reference. Also patched it so the WORLD_SIZE count matched the MP count.
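For context on the WORLD_SIZE/MP matching: the launcher has to start exactly one process per model-parallel checkpoint shard. A small lookup sketch (the MP values are the ones the Code Llama checkpoints ship with; the helper name is ours):

```python
# Model-parallel shard counts the Code Llama checkpoints are split into;
# the launcher must start exactly this many processes.
MP_COUNT = {"7b": 1, "13b": 2, "34b": 4}

def nproc_for(model: str) -> int:
    """WORLD_SIZE (torchrun's --nproc_per_node) must equal the MP count."""
    return MP_COUNT[model]

# e.g. for 13b:  torchrun --nproc_per_node 2 example_completion.py ...
```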

Finally made it work with Code Llama 34B model !!!! As soon as it began running, everything froze and my laptop crashed. I heard some weird noises from my dear computer. I'm not coming back here again, GPT4 is good for everything

lol

sdfgsdfgd avatar Aug 25 '23 06:08 sdfgsdfgd

> +1 just went down this rabbit hole for a bit -- closest thing I found to helping here: facebookresearch/llama@9a5670b

yep, this brought me a step further: there is still another problem:

```
RuntimeError: ProcessGroupGloo::allgather: invalid tensor type at index 0 (expected TensorOptions(dtype=c10::Half, device=cpu, layout=Strided, requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)), got TensorOptions(dtype=c10::Half, device=mps:0, layout=Strided, requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)))

ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 55764) of binary: /opt/dev/miniconda3/envs/llama/bin/python3.10
```
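The error above is Gloo rejecting an MPS tensor: Gloo collectives only accept CPU tensors. One hedged workaround (a sketch only; `allgather_fn` stands in for the actual `dist.all_gather` call) is to round-trip tensors through the CPU:

```python
def allgather_via_cpu(tensor, allgather_fn):
    """Gloo only supports CPU tensors, so move to cpu before the
    collective and back to the original device afterwards."""
    device = tensor.device
    gathered = allgather_fn(tensor.to("cpu"))
    return [t.to(device) for t in gathered]
```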

mauermbq avatar Aug 25 '23 08:08 mauermbq

> David, does this work on M2 macbooks ? If so, I'll patch it.
>
> EDIT: I just applied that PR patch, since mine is M2 - I went with lostmygithubaccount's reference. Also patched it so the WORLD_SIZE count matched the mp count.
>
> Finally made it work with Code Llama 34B model !!!! As soon as it began running, everything froze and my laptop crashed. I heard some weird noises from my dear computer. I'm not coming back here again, GPT4 is good for everything
>
> lol

I have no M2 on hand. I tested it on my Mac M1 Ultra, and it works. Not sure if it works on M2; as far as I know it should be compatible. And I haven't tested the PR on CUDA - it would be great if anyone could help test the PR on CUDA.

davideuler avatar Aug 25 '23 16:08 davideuler

the PR does work on M2, at least the 7b model. I was having trouble with the 13b and 34b and the mp count and world_size setting, not sure what I was doing wrong

lostmygithubaccount avatar Aug 28 '23 16:08 lostmygithubaccount

Can confirm the fix from @davideuler works on my M2 Macbook Air, running the 7b-Instruct model.

brianirish avatar Aug 28 '23 18:08 brianirish

Verified that the solution provided by @davideuler is effective on my M1 MacBook Pro using the 7b model. However, the performance is notably sluggish. Is it possible to run it using GPU acceleration? It runs so fast with GPU acceleration in llama.cpp
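On the GPU question: Apple-silicon Macs expose the GPU through PyTorch's MPS backend, so in principle the model can be placed on `mps` even while the Gloo process group stays on CPU. A hedged device-pick sketch (in real code the flag would come from `torch.backends.mps.is_available()`; the helper name is ours):

```python
def pick_device(mps_available: bool) -> str:
    """Prefer Apple's Metal (mps) backend when present, else cpu.
    Note: the Gloo distributed backend still needs cpu tensors for
    collectives, so only model compute moves to mps."""
    return "mps" if mps_available else "cpu"

# real check:  pick_device(torch.backends.mps.is_available())
```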

foolyoghurt avatar Aug 29 '23 16:08 foolyoghurt

> +1 just went down this rabbit hole for a bit -- closest thing I found to helping here: facebookresearch/llama@9a5670b
>
> yep, this brought me a step further: there is still another problem: RuntimeError: ProcessGroupGloo::allgather: invalid tensor type at index 0 (expected TensorOptions(dtype=c10::Half, device=cpu, …), got TensorOptions(dtype=c10::Half, device=mps:0, …))
>
> ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 55764) of binary: /opt/dev/miniconda3/envs/llama/bin/python3.10

I had the same issue. can anybody provide any help?

liqiang28 avatar Sep 01 '23 03:09 liqiang28

> I had the same issue. can anybody provide any help?

did you try the PR at https://github.com/facebookresearch/codellama/pull/18? it should work for 7b at least

lostmygithubaccount avatar Sep 01 '23 04:09 lostmygithubaccount

while 34b is useless at reasoning, 7b generates almost-relevant code. I could probably write a 10-liner py script that generates snippets with almost the same success. Would have been cool to get 34b running though. 7b is extremely useless, why won't 34b run on mac

sdfgsdfgd avatar Sep 01 '23 04:09 sdfgsdfgd

34b freezes on my m1 mac

binoculars avatar Sep 02 '23 21:09 binoculars

> 34b freezes on my m1 mac

Can you please guide me on how to run the 13B and 34B models on Windows? I have a single GPU and hence am able to run the 7B model, whose model-parallel (MP) value is 1. The 13B model requires MP=2, but I have only 1 GPU on which I want to run inference. What changes should I make, and in which file, so that I can run the 13B model?
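One common route for an MP=2 checkpoint on a single GPU is to merge the two shards back into one state dict before loading. This is only the idea, not the real codellama checkpoint layout: tensor-parallel layers are split along one axis, so merging means concatenating each pair of shard tensors along that layer's split axis (which differs per layer type). A toy sketch with plain lists standing in for 2-D weights:

```python
def merge_shards(shard_a, shard_b, axis: int):
    """Concatenate two 2-D weight shards along the split axis:
    axis=0 for row-split layers, axis=1 for column-split layers."""
    if axis == 0:
        return shard_a + shard_b                              # stack rows
    return [ra + rb for ra, rb in zip(shard_a, shard_b)]      # join columns

# two halves of a toy 2x2 weight, as they might sit in two shards
w0 = [[1, 2], [3, 4]]
w1 = [[5, 6], [7, 8]]
```

In a real merge you would do this per parameter over the two `consolidated.*.pth` files, with the correct axis for each layer; the function and variable names here are illustrative only.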

manoj21192 avatar Sep 05 '23 05:09 manoj21192

> I had the same issue. can anybody provide any help?

> did you try the PR at #18? it should work for 7b at least

I tried the PR at #18, but I used 13b-instruct. Should I change the model to 7b?

liqiang28 avatar Sep 06 '23 02:09 liqiang28

@liqiang28 7b should work with that PR, I haven't been able to get any larger models to work

lostmygithubaccount avatar Sep 06 '23 18:09 lostmygithubaccount

> Verified that the solution provided by @davideuler is effective on my M1 MacBook Pro using the 7b model. However, the performance is notably sluggish. Is it possible to run it using GPU acceleration? It runs so fast with GPU acceleration by llama.cpp

@foolyoghurt Out of curiosity, what's your tokens per second? I'm experiencing the sluggish performance as well.

DavidLuong98 avatar Sep 07 '23 01:09 DavidLuong98

> @liqiang28 7b should work with that PR, I haven't been able to get any larger models to work

Yes, it works after I changed the model to 7B, thanks a lot.

liqiang28 avatar Sep 07 '23 02:09 liqiang28

> +1 just went down this rabbit hole for a bit -- closest thing I found to helping here: facebookresearch/llama@9a5670b
>
> yep, this brought me a step further: there is still another problem: RuntimeError: ProcessGroupGloo::allgather: invalid tensor type at index 0 (expected TensorOptions(dtype=c10::Half, device=cpu, …), got TensorOptions(dtype=c10::Half, device=mps:0, …))
>
> ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 55764) of binary: /opt/dev/miniconda3/envs/llama/bin/python3.10

I have a similar issue to the above, any fix?

robinsonmhj avatar Nov 29 '23 01:11 robinsonmhj