codellama
Inference code for CodeLlama models
Motivation: Thanks for creating this repository. There is an ongoing planned effort from the Intel GPU team to collaborate on enabling out-of-the-box runtime functionality of Code Llama on...
Morning! I need help getting the models to run a _second_ time, on a new instance. Yesterday, I registered for and downloaded the models onto an AWS SageMaker instance. Everything...
I want to know the minimum memory/CPU/GPU requirements for each model to run reasonably fast. On my M1 I ran ``` torchrun --nproc_per_node 1 example_completion.py \ --ckpt_dir CodeLlama-7b/ \...
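As a rough rule of thumb for the memory side of this question, weight memory scales with parameter count times bytes per parameter. A minimal sketch (the helper name is hypothetical; 2 bytes per parameter assumes fp16/bf16 weights, and activations plus the KV cache add further overhead on top):

```python
def approx_weight_memory_gb(n_params_billion: float, bytes_per_param: int = 2) -> float:
    """Lower bound on weight memory in GiB: parameter count times bytes per parameter.

    bytes_per_param: 2 for fp16/bf16, 4 for fp32, 1 for int8 quantization.
    This ignores activations and the KV cache, so treat it as a floor.
    """
    return n_params_billion * 1e9 * bytes_per_param / 1024**3

# CodeLlama-7b in fp16: roughly 13 GiB for the weights alone
print(round(approx_weight_memory_gb(7), 1))
```

By this estimate the 7B model already exceeds the unified memory of smaller M1 machines once activations are added, which is consistent with it running only slowly there.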
I followed the instructions, but I was unable to run it under Windows 10 due to an `nccl` error.
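NCCL is built for Linux only, so on Windows the usual workaround is to switch the distributed backend to `gloo`. A minimal sketch of a platform-aware backend choice (the helper name is hypothetical; the actual `init_process_group` call lives inside the repo's `llama` package):

```python
import platform

def pick_dist_backend() -> str:
    """NCCL ships on Linux only; fall back to gloo (CPU collectives) elsewhere."""
    return "nccl" if platform.system() == "Linux" else "gloo"

# usage sketch (requires torch, shown as a comment only):
# torch.distributed.init_process_group(backend=pick_dist_backend())
```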
Cmd line: `torchrun --nproc_per_node 1 example_infilling.py --ckpt_dir CodeLlama-7b-Instruct/ --tokenizer_path CodeLlama-7b-Instruct/tokenizer.model --max_seq_len 512 -- max_batch_size 4` Error raised: ``` > initializing model parallel with size 1 > initializing ddp with size...
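Note the stray space in `-- max_batch_size 4` in the command above: a bare `--` token conventionally ends option parsing, so the flag is never set. A minimal argparse sketch of the pitfall (the repo's example scripts use their own argument handling, but the token-splitting problem is the same):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--max_batch_size", type=int, default=1)

# correct: "--" and the flag name are a single token
ok, _ = parser.parse_known_args(["--max_batch_size", "4"])

# stray space: the bare "--" ends option parsing, the rest becomes loose
# positional text, and max_batch_size silently keeps its default
bad, leftover = parser.parse_known_args(["--", "max_batch_size", "4"])
```

So the fix for the command line is simply removing the space: `--max_batch_size 4`.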
WARNING:torch.distributed.run: ***************************************** Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your...
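This warning is benign: `torchrun` defaults `OMP_NUM_THREADS` to 1 per process so that multiple workers do not oversubscribe the CPU. To tune it, set the variable before the thread pools initialize; a minimal sketch (the value 4 is an arbitrary example, tune per machine):

```python
import os

# must happen before torch (or numpy) spins up its OpenMP thread pool;
# 4 is an arbitrary example value, not a recommendation
os.environ["OMP_NUM_THREADS"] = "4"
```

When launching with `torchrun`, the same effect comes from setting the variable in the shell environment before the command, e.g. prefixing it with `OMP_NUM_THREADS=4`.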
I'm trying to use the example inference on Windows 10 with Python 3.10, like this: `(py310) d:\git\codellama>torchrun --nproc_per_node 1 example_instructions.py --ckpt_dir CodeLlama-7b-Instruct/ --tokenizer_path CodeLlama-7b-Instruct/tokenizer.model --max_seq_len 512 --max_batch_size 4` But it...
**Changes:** Prerequisite checks: added a function `check_prerequisites` to verify that wget and md5sum are installed; it also offers to install these packages if they are missing. Log function: introduced...
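The check the PR describes is a shell function in the download script; an equivalent sketch in Python using `shutil.which` (the tool list is the one the PR names; the offer to install missing packages is omitted here):

```python
import shutil

def check_prerequisites(tools=("wget", "md5sum")):
    """Return the required tools that are missing from PATH."""
    return [t for t in tools if shutil.which(t) is None]

missing = check_prerequisites()
if missing:
    print("missing prerequisites:", ", ".join(missing))
```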
I'm trying to fine-tune CodeLlama with the same approach as Llama 2, using the same fine-tuning script. I'm not sure whether I'm doing it right, as the repo or blog does not...
I hope this message finds you well. I recently had the opportunity to experiment with the CodeLlama-7b-Instruct model from the GitHub repository and was pleased to observe its promising performance. Encouraged...