llama
Inference code for Llama models
In what will surprise no one, the Llama weights have already been leaked on torrent sites; I just did a search for them. Therefore any bad actors will already be able to...
Connecting to dobf1k6cxlizq.cloudfront.net (dobf1k6cxlizq.cloudfront.net)|13.226.237.67|:443... connected. HTTP request sent, awaiting response... 403 Forbidden 2023-03-05 18:04:37 ERROR 403: Forbidden.
I am downloading the model on a Mac Pro (Intel chip) using the terminal. When I run a few different commands: 1) ./download.sh 2) `brew --prefix bash`/bin/bash ./download.sh I get the error: Checking checksums...
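The "Checking checksums..." step can be reproduced by hand when `download.sh` fails. A minimal sketch in Python (the function names `md5_of`/`verify` are my own, and the `digest  filename` checklist format is an assumption based on the output of `md5sum`, which the download script appears to rely on and which is not present by default on macOS):

```python
import hashlib


def md5_of(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(checklist_path):
    """Check every 'digest  filename' line, like `md5sum --check`.

    Returns True only if every listed file matches its digest.
    """
    all_ok = True
    with open(checklist_path) as f:
        for line in f:
            digest, name = line.split(maxsplit=1)
            name = name.strip()
            ok = md5_of(name) == digest
            all_ok = all_ok and ok
            print(f"{name}: {'OK' if ok else 'FAILED'}")
    return all_ok
```

This sidesteps the missing-`md5sum` problem on macOS entirely, since `hashlib` ships with Python.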
I'm trying to run the 7B model on an RTX 3090 (24 GB) on WSL Ubuntu, but I'm getting the following error: ``` jawgboi@DESKTOP-SLIQCDH:~/git/llama$ torchrun --nproc_per_node 1 example.py --ckpt_dir "./model/7B" --tokenizer_path...
I am able to get sensible output by running 7B on a single 24 GB GPU with MP 1. ``` (llama) user@e9242bd8ac2c:~/llama$ CUDA_VISIBLE_DEVICES="0,1" torchrun --nproc_per_node 1 example.py --ckpt_dir checkpoints/7B --tokenizer_path checkpoints/tokenizer.model > initializing...
I have successfully downloaded the 7B, 13B, and 30B models. When downloading the 65B model, I successfully got consolidated .pth checkpoints 0-4, but failed on the 5th and the following 6th, 7th, and 8th checkpoints. Here is the...
Hello! I really want to test out the 7B model. Is there any option to offload it to RAM? My GPU is an RTX 3070 Ti with 8 GB VRAM and I...
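For context on why 8 GB of VRAM is not enough without offloading, a back-of-the-envelope estimate (assuming fp16/bf16 weights at 2 bytes per parameter, and ignoring activations and the KV cache, which only add to the total):

```python
# Rough memory footprint of the 7B model's weights alone.
params = 7e9            # ~7 billion parameters
bytes_per_param = 2     # fp16 / bf16 storage
weight_gib = params * bytes_per_param / 2**30
print(f"~{weight_gib:.1f} GiB of weights")  # ~13.0 GiB, well over 8 GB of VRAM
```

So the weights alone are roughly 13 GiB in half precision, which is why 8 GB cards need CPU offloading or quantization.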
I get a 403 (Forbidden) status code when trying to download the consolidated.01.pth file for the 7B model. For all other files, I get 200 (OK).