
Inference Llama 2 in one file of pure C

146 llama2.c issues, sorted by recently updated

Hi, I want to ask how one can split a dataset into train/val splits. In **tinystories.py** I don't quite understand the comment: >train/test split. let's use only shard 0...
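
For reference, a minimal sketch of the shard-based split that comment appears to describe: shard 0 is held out for validation/test and every other shard is used for training. The directory and filename patterns below are assumptions, not the exact names used by the script.

```python
import glob
import os

# Assumed layout: tinystories.py has already pretokenized the data into
# a directory of .bin shards (names/paths here are placeholders).
DATA_DIR = "data/TinyStories_all_data"

shard_filenames = sorted(glob.glob(os.path.join(DATA_DIR, "*.bin")))

# "let's use only shard 0" = hold out the first shard for val/test,
# and train on all the remaining shards.
val_shards = shard_filenames[:1]
train_shards = shard_filenames[1:]

print(f"{len(train_shards)} train shards, {len(val_shards)} val shards")
```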

Hi, does anyone know if there is a script/code to reproduce the val loss using the provided "*.bin" models? I've tried myself and can't get the numbers that were shared. Thank you.
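
I'm not aware of a ready-made script either; below is a rough sketch of the eval loop I would try, assuming you load the PyTorch checkpoint rather than the `.bin` itself, and where `get_val_batch` is a hypothetical helper that yields `(x, y)` token tensors from your validation shards.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def estimate_val_loss(model, get_val_batch, num_batches=100, device="cpu"):
    """Average cross-entropy over num_batches validation batches.

    get_val_batch is a hypothetical helper returning (x, y) LongTensors of
    shape (batch, seq_len); adapt it to however you iterate your shards.
    The logits shape (batch, seq_len, vocab_size) is also an assumption.
    """
    model.eval()
    losses = []
    for _ in range(num_batches):
        x, y = get_val_batch()
        x, y = x.to(device), y.to(device)
        logits = model(x)
        loss = F.cross_entropy(
            logits.view(-1, logits.size(-1)), y.view(-1), ignore_index=-1
        )
        losses.append(loss.item())
    return sum(losses) / len(losses)
```

Small gaps versus reported numbers can also come from batch size, sequence length, and which shard is used for validation.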

I have a 64GB RAM MacBook Pro with a 2TB SSD, and I still cannot export the 70B model. It looks like `export_meta_llama_bin.py` loads the full model into RAM and then exports it. Consider "streaming"...
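
+1 for this. A rough sketch of what streaming could look like: process one `consolidated.*.pth` shard at a time and free it before loading the next, instead of merging everything into a single state dict first. Paths are placeholders, the per-tensor serialization is deliberately hand-waved, and a real 70B export still has to concatenate the tensor-parallel shards along the right dimensions, which this sketch skips.

```python
import gc
import glob
import torch

CKPT_DIR = "llama-2-70b"  # placeholder: Meta checkpoint directory

def export_streaming(out_path):
    shard_paths = sorted(glob.glob(f"{CKPT_DIR}/consolidated.*.pth"))
    with open(out_path, "wb") as f:
        for path in shard_paths:
            # mmap=True (PyTorch >= 2.1) avoids materializing the whole
            # shard in RAM at once; map_location keeps it on the CPU.
            shard = torch.load(path, map_location="cpu", mmap=True)
            for name, tensor in shard.items():
                # Hypothetical: write each tensor out immediately in
                # whatever layout run.c expects.
                f.write(tensor.to(torch.float32).numpy().tobytes())
            del shard
            gc.collect()
```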

I have observed a significant degradation in the quality of generated text when applying q8 quantization. The models were trained in float16. During training, I saved the q8-quantized model alongside...
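
One way to narrow this down is to measure the round-trip error of the quantization per tensor. Here is a minimal sketch of a symmetric int8 (roughly Q8_0-style) quantize/dequantize check; the group size and function name are assumptions, not the repo's exact scheme.

```python
import torch

def q8_roundtrip_error(w: torch.Tensor, group_size: int = 64) -> float:
    """Max absolute error after symmetric int8 quantize/dequantize.

    Assumes w.numel() is divisible by group_size. Running this over every
    tensor in the state dict makes it easy to spot layers (often the
    embeddings or the output head) that suffer disproportionately from q8.
    """
    w32 = w.float().reshape(-1, group_size)
    scale = w32.abs().max(dim=1, keepdim=True).values / 127.0
    scale = torch.where(scale == 0, torch.ones_like(scale), scale)
    q = torch.clamp(torch.round(w32 / scale), -127, 127)
    deq = (q * scale).reshape(w.shape)
    return (deq - w.float()).abs().max().item()
```

If a handful of tensors dominate the error, keeping just those in float16 is a common compromise.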

I want to train on something other than TinyStories. I have a list of plain text files. How do I train on them?
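
A minimal sketch of the pretokenization step for plain text files, assuming you use the Llama sentencepiece `tokenizer.model` and then point the training data loader at the resulting `.bin` (all paths and filenames below are placeholders):

```python
import glob
import numpy as np
import sentencepiece as spm

TOKENIZER_MODEL = "tokenizer.model"   # placeholder path
TXT_GLOB = "mydata/*.txt"             # your plain text files
OUT_BIN = "mydata/train.bin"          # pretokenized output

sp = spm.SentencePieceProcessor(model_file=TOKENIZER_MODEL)

all_tokens = []
for path in sorted(glob.glob(TXT_GLOB)):
    with open(path, "r", encoding="utf-8") as f:
        text = f.read().strip()
    # One BOS per document, similar in spirit to how tinystories.py
    # tokenizes each story before concatenating.
    all_tokens.append(sp.bos_id())
    all_tokens.extend(sp.encode(text))

# uint16 is enough for the 32000-token Llama vocab.
np.array(all_tokens, dtype=np.uint16).tofile(OUT_BIN)
print(f"wrote {len(all_tokens)} tokens to {OUT_BIN}")
```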

Hi, thank you for the great work. I am curious whether there are any Keras-based implementations of the tiny llamas available. Thank you.

If we can create models targeted at a custom dataset rather than "stories", then perhaps it would become more useful. For instance, how do we make it work for a...

Hi, I am trying to convert the llama2 7B model with the script below: `python export_meta_llama_bin.py ~/projects/75_NLP/llama-main/llama-2-7b llama2_7b.bin`. It always ends with a "killed" message. My hardware is an i7-12700H with 16GB RAM & an NVIDIA...

Llama-shepherd-cli is a small tool designed to simplify the management of and experimentation with Llama2 implementations across multiple languages. Whether you're wrangling tiny llama models or herding llamas of various sizes,...