llama

Inference code for Llama models

Results: 412 llama issues, sorted by recently updated

Hi Community, I was able to run example.py for the 13B model and see a result with two T4 GPUs (16 GB each) using torchrun: ``` torchrun --nproc_per_node 2 example.py --ckpt_dir...
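For reference, a two-GPU invocation of the example script looks roughly like the sketch below; the paths are illustrative, and `--nproc_per_node 2` matches the 13B checkpoint's two shards (as in the report above):

```
# Illustrative paths; point --ckpt_dir at wherever the 13B weights were downloaded.
torchrun --nproc_per_node 2 example.py \
    --ckpt_dir ./models/13B \
    --tokenizer_path ./models/tokenizer.model
```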

Here is the link, and the weights are not locatable, of course: https://huggingface.co/spaces/chansung/LLaMA-7B

```
(venv) D:\Downloads\LLaMA>torchrun --nproc_per_node 2 example.py --ckpt_dir models/13B --tokenizer_path models/tokenizer.model
NOTE: Redirects are currently not supported in Windows or MacOs.
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to...
```

Could someone please be so kind as to help me? I received an email with a URL, but I'm not sure how to download the contents. I have limited knowledge...

I am running `torchrun --nproc_per_node 1 example.py --ckpt_dir ./7B/ --tokenizer_path ./tokenizer.model` and my output is:
```
NOTE: Redirects are currently not supported in Windows or MacOs.
Traceback (most recent call...
```

The current download script gives an error when executed on Mac (a minimal reproduction sketch follows below):
download.sh: line 10: 7B: value too great for base (error token is "7B")
download.sh: line 11: 13B: value too great...

CLA Signed
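For background on that error: macOS ships bash 3.2, which does not support `declare -A` (associative arrays), so a string subscript such as `"7B"` is evaluated as base-10 arithmetic, and the letter `B` overflows the base. A minimal, illustrative reproduction (variable names are made up, not the script's actual code):

```
#!/bin/bash
declare -A SHARDS      # bash 3.2: "declare: -A: invalid option", array stays indexed
SHARDS["7B"]=0         # subscript is then evaluated arithmetically:
                       #   7B: value too great for base (error token is "7B")
SHARDS["13B"]=1        # same failure for "13B"
```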

Creating the TARGET_FOLDER before downloading the tokenizer; otherwise, if the TARGET_FOLDER does not exist, the tokenizer download fails (a minimal sketch of the fix follows below).

CLA Signed
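A minimal sketch of the fix described in that PR, assuming the tokenizer is fetched with `wget` into `${TARGET_FOLDER}` (the variable values and URL below are illustrative):

```
#!/bin/bash
PRESIGNED_URL="https://example.com/signed-url"   # illustrative; use the URL from the email
TARGET_FOLDER="./model_weights"                  # illustrative download location

# Create the target folder first, so the tokenizer download has somewhere to land.
mkdir -p "${TARGET_FOLDER}"

wget "${PRESIGNED_URL}" -O "${TARGET_FOLDER}/tokenizer.model"
```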

Added an option for non-Linux users to choose between `wget` and `curl` for downloading files with this script (a sketch of one way to wire this up follows below). To use `curl`, simply include the `-v curl` flag when...

CLA Signed
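One way such a `-v` switch could be wired up with `getopts`; the flag name comes from the PR description above, everything else here is illustrative:

```
#!/bin/bash
DOWNLOADER="wget"                   # default downloader

# Parse an optional "-v curl" (or "-v wget") flag.
while getopts "v:" opt; do
  case "${opt}" in
    v) DOWNLOADER="${OPTARG}" ;;
  esac
done
shift $((OPTIND - 1))

# fetch <url> <destination>: download with whichever tool was selected.
fetch() {
  if [[ "${DOWNLOADER}" == "curl" ]]; then
    curl -L -o "$2" "$1"
  else
    wget -O "$2" "$1"
  fi
}

# Illustrative usage:
# fetch "${PRESIGNED_URL}" "${TARGET_FOLDER}/tokenizer.model"
```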