torch.distributed.elastic.multiprocessing.redirects: [WARNING] NOTE: Redirects are currently not supported in Windows or MacOs.
Hi,
I was trying to run Llama 2 on my local computer (Windows 10, 64 GB RAM, GPU 0: Intel(R) Iris(R) Xe Graphics) and got the following errors:

- `raise RuntimeError("Distributed package doesn't have NCCL built in")` — resolved by calling `torch.distributed.init_process_group("gloo")` after `import torch`.
- `torch._C._cuda_setDevice(device)` raising `AttributeError: module 'torch._C' has no attribute '_cuda_setDevice'` — resolved by commenting out `if device >= 0: torch._C._cuda_setDevice(device)` in `torch\cuda\__init__.py`.
- `TypeError: type torch.cuda.HalfTensor not available. Torch not compiled with CUDA enabled.`

What should I do now? Is it even possible to make Llama work on a computer with an Intel GPU?
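For reference, the first and third workarounds above can be sketched together without editing PyTorch's source. This is a minimal sketch, not code from the llama repo: the helper name `init_distributed` and the env-var defaults are mine, and it assumes a single-process run.

```python
import os

import torch
import torch.distributed as dist


def init_distributed(rank: int = 0, world_size: int = 1) -> str:
    """Initialize torch.distributed, falling back to gloo when NCCL
    is unavailable (NCCL is not built into Windows/macOS wheels)."""
    # Defaults for a single-machine, single-process run (assumption).
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    if torch.cuda.is_available() and dist.is_nccl_available():
        backend = "nccl"
    else:
        backend = "gloo"  # CPU-friendly backend, available on all platforms
    dist.init_process_group(backend, rank=rank, world_size=world_size)
    return backend


# The example scripts default to torch.cuda.HalfTensor; without CUDA,
# fall back to full-precision CPU tensors to avoid the HalfTensor error.
if not torch.cuda.is_available():
    torch.set_default_tensor_type(torch.FloatTensor)
```

Whether the rest of the example scripts then run on CPU (and how slowly) is a separate question; this only avoids the backend and dtype crashes.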
I don't think we've ever tested with Intel GPUs, so I am not surprised if it doesn't work.
Btw, this looks like more of a PyTorch compatibility issue than one for Llama. Might be worth it to transfer this issue to pytorch/pytorch, where they may be able to be of more help.
Thanks for solving the first two points; that works for me. I am using macOS with an Intel Core CPU and was facing the same issue.
@JishnuChoudhury did you contact PyTorch? Were you able to solve the third issue in the meantime?
@WieMaKa no, I did not contact PyTorch. I ended up using llama.cpp for quantized Llama 2 models on CPU.
Hi, I am having the same issue but running on a Mac M1.
I've followed all the install steps (`pip install -e .`, `./download.sh`, `pip install -r requirements.txt`) and successfully downloaded the model llama-2-7b-chat. However, I get the same error while running the command below:
torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir llama-2-7b-chat/ \
    --tokenizer_path tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
Any advice?
Thanks, Alexandre
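On an M1 Mac there is no CUDA at all, but recent PyTorch builds ship an MPS (Apple-GPU) backend. A minimal sketch of picking a usable device before loading the model — the helper name `pick_device` is mine, and whether the llama example scripts actually accept a non-CUDA device is an assumption to verify:

```python
import torch


def pick_device() -> torch.device:
    """Prefer CUDA, then Apple's MPS backend, then plain CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")  # Apple Silicon (M1/M2) GPU backend
    return torch.device("cpu")


# Example: move a tensor to whatever is available on this machine.
x = torch.randn(4, 4).to(pick_device())
```

`torch.backends.mps.is_available()` requires PyTorch 1.12 or newer; on older versions you would guard the attribute lookup with `hasattr`.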
same issue here
same issue
When you're installing the 7B model, does the git window that you copy your emailed URL into just disappear instantly? Mine does.
same issue