
Inference code for CodeLlama models

109 codellama issues

Motivation: Thanks for creating this repository. There is an ongoing effort planned, on the Intel GPU side, to collaborate on enabling out-of-the-box runtime functionality of Code Llama on...

cla signed

Morning! I need help getting the models to run a _second_ time, on a new instance. Yesterday, I registered for and downloaded the models onto an AWS SageMaker instance. Everything...

model-usage

I want to know the minimum memory/CPU/GPU requirements for each model to run relatively fast. On my M1 I ran: ``` torchrun --nproc_per_node 1 example_completion.py \ --ckpt_dir CodeLlama-7b/ \...

documentation
question
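As a back-of-the-envelope estimate (an assumption, not an official requirement from this repo), the memory needed just to hold the weights is roughly the parameter count times the bytes per parameter; actual usage is higher once activations, the KV cache, and framework overhead are included. A minimal sketch:

```python
# Rough sketch: estimate memory to hold model weights alone.
# Figures are assumptions, not official requirements; real usage
# is higher (activations, KV cache, framework overhead).

def weight_memory_gib(num_params: float, bytes_per_param: int) -> float:
    """Memory in GiB to store the raw weights."""
    return num_params * bytes_per_param / 1024**3

# CodeLlama-7b has ~7e9 parameters; fp16/bf16 uses 2 bytes per weight.
fp16_gib = weight_memory_gib(7e9, 2)  # ~13 GiB
int8_gib = weight_memory_gib(7e9, 1)  # ~6.5 GiB

print(f"fp16: {fp16_gib:.1f} GiB, int8: {int8_gib:.1f} GiB")
```

By this estimate, running the 7B model in half precision on a single GPU calls for comfortably more than 13 GiB of device memory.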

I followed the instructions, and I was unable to run it under Windows 10 due to `nccl`

compatibility
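NCCL only ships for Linux, which is why the examples fail under Windows 10. A common workaround (an assumption about the user's setup, not something this repo documents) is to select PyTorch's `gloo` backend on non-Linux platforms. A minimal sketch of the backend selection:

```python
import platform

def pick_backend() -> str:
    """Choose a torch.distributed backend for the current OS.

    NCCL is Linux-only; gloo works on Windows and macOS (CPU-side
    collectives, so GPU communication is slower than NCCL).
    """
    return "nccl" if platform.system() == "Linux" else "gloo"
```

The result would then be passed as `torch.distributed.init_process_group(backend=pick_backend())` in place of the hard-coded `nccl` backend.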

Cmd line: `torchrun --nproc_per_node 1 example_infilling.py --ckpt_dir CodeLlama-7b-Instruct/ --tokenizer_path CodeLlama-7b-Instruct/tokenizer.model --max_seq_len 512 -- max_batch_size 4` Error Raised ``` > initializing model parallel with size 1 > initializing ddp with size...

model-usage

WARNING:torch.distributed.run: ***************************************** Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your...

model-usage
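The warning above is torchrun's default behavior: it pins `OMP_NUM_THREADS` to 1 per process so that multiple workers don't oversubscribe the CPU. A common tuning heuristic (an assumption, not an official rule) is to divide the physical core count by the number of local processes and export that before launch:

```python
import os

def omp_threads(cpu_count: int, nproc_per_node: int) -> int:
    """Heuristic: split available cores evenly across local workers,
    never going below 1 thread per process."""
    return max(1, cpu_count // nproc_per_node)

# Set before launching, so torchrun inherits it instead of defaulting to 1.
os.environ["OMP_NUM_THREADS"] = str(omp_threads(os.cpu_count() or 1, 1))
```

Equivalently, the variable can be set inline on the command line, e.g. `OMP_NUM_THREADS=8 torchrun --nproc_per_node 1 ...`.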

I'm trying to use the example inference on Windows 10 with Python 3.10, like this: `(py310) d:\git\codellama>torchrun --nproc_per_node 1 example_instructions.py --ckpt_dir CodeLlama-7b-Instruct/ --tokenizer_path CodeLlama-7b-Instruct/tokenizer.model --max_seq_len 512 --max_batch_size 4` But it...

model-usage

**Changes:** Prerequisite checks: added a function check_prerequisites to verify that wget and md5sum are installed; it also offers to install these packages if they are missing. Log function: introduced...

enhancement
download-install

I am trying to fine-tune CodeLlama with the same approach as Llama 2, using the same fine-tuning script. I am not sure whether I am doing it right, as the repo or blog does not...

model-usage
fine-tuning

I hope this message finds you well. I recently had the opportunity to experiment with the CodeLlama-7b-Instruct model from the GitHub repository and was pleased to observe its promising performance. Encouraged...

fine-tuning