codellama
Inference code for CodeLlama models
There is no information about the prerequisites: what GPU and how much memory are required to run the models during inference. Please help.
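As a rough rule of thumb for the question above, the weights alone dominate the requirement: parameter count times bytes per parameter, plus some overhead. A minimal sketch, assuming fp16 weights and a ~20% overhead factor (the overhead value is an assumption, not from the repo; actual usage also grows with context length):

```python
# Rough VRAM estimate for loading model weights at inference time.
# Assumptions: fp16 weights (2 bytes/param) and ~20% overhead for
# activations and CUDA context; real usage also depends on context length.

def estimate_vram_gb(n_params_billion: float, bytes_per_param: int = 2,
                     overhead: float = 1.2) -> float:
    """Return an approximate VRAM requirement in GiB."""
    weights_bytes = n_params_billion * 1e9 * bytes_per_param
    return weights_bytes * overhead / 2**30

for size in (7, 13, 34):
    print(f"CodeLlama-{size}B fp16: ~{estimate_vram_gb(size):.0f} GiB")
```

By this estimate the 7B model needs roughly 16 GiB in fp16, which is why 8-bit or 4-bit quantization is commonly used on consumer GPUs.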
I can't run the examples on a Windows machine and am currently blocked in these attempts. We should see information about the requirements in the documentation.
I am currently using CodeLlama-7B on an RTX 3090 24GB GPU, and I have a question regarding the relationship between context length and VRAM usage. According to the model documentation,...
I ran the bash script from the command console, received the following message, and the download was aborted:
Checking checksums
md5sum: checklist.chk: no properly formatted checksum lines found
Any...
While downloading 13B, the model size is around 12 GB and it is saying:
consolidated.00.pth -> OK
consolidated.01.pth -> FAILED
I am following all the steps as mentioned but...
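For checksum failures like the two reports above, it can help to verify files yourself instead of relying on the script. A minimal sketch of an md5sum-style check (assumption: `checklist.chk` uses the standard `HASH  FILENAME` line format; the function name is mine):

```python
# Verify files against an md5sum-style checklist ("HASH  FILENAME" per line).
# A FAILED result usually means a corrupted or partial download; re-fetch
# that single file rather than the whole set.
import hashlib
import pathlib

def verify_md5(checklist_path: str) -> dict:
    """Return {filename: 'OK' | 'FAILED' | 'MISSING'} for each entry."""
    results = {}
    base = pathlib.Path(checklist_path).parent
    for line in pathlib.Path(checklist_path).read_text().splitlines():
        line = line.strip()
        if not line:
            continue
        expected, name = line.split(maxsplit=1)
        name = name.lstrip("*")  # md5sum marks binary files with a leading '*'
        target = base / name
        if not target.exists():
            results[name] = "MISSING"
            continue
        actual = hashlib.md5(target.read_bytes()).hexdigest()
        results[name] = "OK" if actual == expected else "FAILED"
    return results
```

Note that "no properly formatted checksum lines found" from `md5sum -c` often indicates the checklist itself was saved with the wrong line endings or encoding (e.g. edited on Windows), not that the model files are bad.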
Hello everyone. To download the Code Llama model, do I first need to fill out a form on the official website to get a download link and license? https://ai.meta.com/resources/models-and-libraries/llama-downloads/ However,...
Does CodeLlama 13B/34B/70B support function calling and LoRA fine-tuning on multi-turn chat with function calling? Are there any instruction pages about this? Thanks a lot.
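On the LoRA half of the question above, the core mechanism is framework-independent: the base weight W is frozen and only a low-rank delta B·A is trained, so the effective weight is W + B·A. A pure-Python illustration of that update rule (assumption: this is a didactic sketch, not the repo's fine-tuning API):

```python
# Pure-Python sketch of the LoRA update rule: the effective weight is
# W + B @ A, where only A (r x d) and B (d x r) are trained, with r << d.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_weight(W, A, B):
    """Effective weight W + B @ A (the LoRA merge step)."""
    delta = matmul(B, A)
    return [[W[i][j] + delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# d=2, rank r=1: the adapter adds only 2*d*r = 4 trainable numbers.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[0.5], [0.5]]   # d x r
A = [[1.0, 1.0]]     # r x d
print(lora_weight(W, A, B))  # [[1.5, 0.5], [0.5, 1.5]]
```

In practice this is what adapter libraries do per attention projection; the low rank r is why LoRA fits on a single GPU.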
Hello, I'm fine-tuning using **CodeLlama-34b** as a base model. During training, my loss always shows 0 with all the datasets. Would someone be able to help me with...
Hi, I have a single GPU on my system and I am using CodeLlama-7b to test my environment. I am running into the following error when I run the sample....
I did not really measure whether it's infinite (I hit Ctrl+C before infinity), but it suddenly started to repeat itself:
tvali@PC366:~$ ollama run codellama:7b
>>> Is LDM a Deep...
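Runaway repetition like the session above is a common sampling failure mode; ollama exposes a `repeat_penalty` option for it. As a client-side heuristic (assumption: this detector is my own sketch, not part of ollama), one can also cut off a stream once its tail starts repeating verbatim:

```python
# Heuristic sketch for spotting runaway repetition in streamed model output.
# Assumption: exact verbatim loops; raising ollama's `repeat_penalty`
# sampling option is the usual first fix.

def is_repeating(text: str, min_len: int = 20) -> bool:
    """True if the tail of `text` is an immediate verbatim repeat of the
    chunk just before it (loop period of at least `min_len` characters)."""
    for size in range(min_len, len(text) // 2 + 1):
        if text[-size:] == text[-2 * size:-size]:
            return True
    return False

looping = "LDM stands for Latent Diffusion Model. " * 3
print(is_repeating(looping))                                       # True
print(is_repeating("A single, non-repeating answer about LDMs."))  # False
```

Checking only the tail keeps the scan cheap enough to run on every streamed chunk.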