codellama
Inference code for CodeLlama models
I was doing inference with codellama-2-7B. Here is my code:

```
inputs_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(self.device)
generate_ids = model.generate(inputs_ids, max_new_tokens=1024, num_return_sequences=1, pad_token_id=tokenizer.eos_token_id)
output = tokenizer.decode(generate_ids[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
```

I want to know the maximum...
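The question above asks about the maximum output length. With a Hugging Face-style `generate` call, the hard ceiling is the model's context window minus the prompt length; a minimal sketch of that budget, assuming CodeLlama's 16,384-token fine-tuning context (the helper name is illustrative, not a library function):

```python
# CodeLlama models are fine-tuned on 16k-token sequences; this constant and
# helper are illustrative, not part of any library API.
CONTEXT_WINDOW = 16_384

def max_generatable_tokens(prompt_tokens: int,
                           context_window: int = CONTEXT_WINDOW) -> int:
    """Tokens left for `max_new_tokens` once the prompt occupies the window."""
    return max(context_window - prompt_tokens, 0)
```

So with `max_new_tokens=1024` as in the snippet, generation stops early only if the prompt plus 1024 would exceed the window.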
## Problem Description After completing setup for CodeLlama from the [README.md](https://github.com/facebookresearch/codellama), when I attempt to run any of the examples with the specified commands: ``` torchrun --nproc_per_node 1 example_completion.py --ckpt_dir...
I have been trying to host Code Llama from Hugging Face locally and run it. It runs solely on the CPU and does not utilize the GPU available in...
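For the GPU question above: with PyTorch, both the model weights and the input tensors must be moved to the CUDA device explicitly, otherwise everything stays on the CPU. A minimal sketch; the checkpoint name and loading calls in the comments show the usual Hugging Face pattern as an assumption, not verified against this issue:

```python
import torch

def pick_device() -> str:
    """Prefer the GPU when one is visible to PyTorch."""
    return "cuda" if torch.cuda.is_available() else "cpu"

device = pick_device()

# Usual Hugging Face pattern (comments only; checkpoint name is an example):
#   model = AutoModelForCausalLM.from_pretrained(
#       "codellama/CodeLlama-7b-hf", torch_dtype=torch.float16)
#   model.to(device)                                  # move the weights
#   inputs = tokenizer(prompt, return_tensors="pt").to(device)  # and the inputs
```

If either the model or the inputs is left on the CPU, PyTorch will raise a device-mismatch error or silently run everything on the CPU.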
I want to add some tokens like `[BOST]` to the tokenizer so that it does not split them. How can I achieve this? Any suggestions are welcome. Hugging Face provides functions...
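On the tokenizer question: Hugging Face tokenizers expose `add_tokens` (and `add_special_tokens`), after which the model's embedding matrix must be resized with `model.resize_token_embeddings(len(tokenizer))`. A toy stand-in for what `add_tokens` does, to illustrate the idea without downloading a checkpoint:

```python
def add_tokens(vocab: dict, new_tokens: list) -> int:
    """Toy version of tokenizer.add_tokens: register each unseen token as a
    single atomic vocabulary entry so the tokenizer never splits it."""
    added = 0
    for tok in new_tokens:
        if tok not in vocab:
            vocab[tok] = len(vocab)  # next free id
            added += 1
    return added

vocab = {"def": 0, "return": 1}
n = add_tokens(vocab, ["[BOST]"])
# With a real tokenizer, the required follow-up step is:
#   model.resize_token_embeddings(len(tokenizer))
```

The real `tokenizer.add_tokens` likewise returns the number of tokens actually added, so you can skip the resize when it returns 0.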
I understand this might be a Hugging Face-related problem, but I cannot find the answer anywhere, so I have come to ask for help. On Hugging Face there is example code for...
I use the 7b-Instruct and 13b-Instruct models for a program analysis task. When I use the original-format 7b-Instruct and 13b-Instruct models, following `example_chat_completion.py`, everything works correctly. But when I use...
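A common cause of odd Instruct-model output outside `example_chat_completion.py` is the prompt template. CodeLlama-Instruct follows the Llama 2 chat convention of wrapping the user turn in `[INST] ... [/INST]`, with an optional `<<SYS>>` block; a hedged sketch of that wrapping (the function name is illustrative):

```python
def wrap_instruct(user_message: str, system: str = "") -> str:
    """Illustrative Llama-2-style instruct wrapping used by *-Instruct models."""
    if system:
        return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user_message} [/INST]"
    return f"[INST] {user_message} [/INST]"
```

When this wrapping is omitted, the Instruct checkpoints tend to continue the text rather than answer it.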
Thank you for this amazing effort! I would like to fine-tune Code Llama on my own Python code; let's call it MyPackage for now. Ultimately, I would like to ask...
Hi, I'm testing codellama, and I would like a guide on how to enable it to accept 100K input tokens. From what I understand, this is done...
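On the 100K-token question: the Code Llama paper reports that long-context behavior comes from raising the RoPE base period from Llama 2's 10,000 to 1,000,000 during fine-tuning, which slows the positional rotations so very distant positions stay distinguishable. A small sketch of how the base changes the per-dimension rotation frequencies:

```python
def rope_frequencies(head_dim: int, base: float) -> list:
    """Rotation frequencies for rotary position embeddings: base**(-2i/d)."""
    return [base ** (-2.0 * i / head_dim) for i in range(head_dim // 2)]

llama2_freqs = rope_frequencies(128, 10_000.0)        # Llama 2 base period
codellama_freqs = rope_frequencies(128, 1_000_000.0)  # Code Llama long-context base
```

The larger base yields uniformly slower rotations (except for the first pair, which is always frequency 1), which is what stretches the usable position range.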
When I try to run a model:

```
torchrun example_js.py \
    --ckpt_dir CodeLlama-13b-Instruct \
    --tokenizer_path CodeLlama-13b-Instruct/tokenizer.model \
    --max_seq_len 1024 --max_batch_size 4 --nproc_per_node 2
```

example_js is the same as...
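One likely cause of the failure above: `--nproc_per_node` is a flag of `torchrun` itself, so it must appear before the script name; anything after `example_js.py` is forwarded to the script, which does not recognize it. A sketch of the usual ordering, keeping the paths from the question (and noting that the 13B checkpoints expect 2 model-parallel processes, so `--nproc_per_node 2` matches):

```shell
# torchrun's own flags come before the script; script flags come after it.
torchrun --nproc_per_node 2 example_js.py \
    --ckpt_dir CodeLlama-13b-Instruct \
    --tokenizer_path CodeLlama-13b-Instruct/tokenizer.model \
    --max_seq_len 1024 --max_batch_size 4
```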
Hello! I signed up to download the Code-Llama model from Meta. I received the email with the Unique Custom URL. **However, when I attempt to download the model, the script...