open_llama
OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
Hi, I made a YouTube video on how to run OpenLLaMA on Google Colab with Transformers. I am adding it to a resources section along with a link to a...
Hi, I fine-tuned the 300BT checkpoint on Alpaca+ShareGPT. Code generation works fine with LLaMA, but with OpenLLaMA the backticks around code blocks are missing. Did you eliminate those in...
Huggingface -> Hugging Face
Hey, thanks very much for the effort. Looking at the released model size, I'm wondering if it can fit in free Colab?
Hi OpenLM team, thanks for such a great contribution to the open source community. At [PyThaiNLP](https://github.com/PyThaiNLP/WangChanGLM) we've been trying to replicate Alpaca-like instruction followers for non-Latin languages (Thai, Japanese, Vietnamese and...
Are there any plans to train a 30B replica of LLaMA, or is the 7B enough to meet your purposes of comparison?
Excellent work. For the checkpoint at 300B tokens, would you mind sharing the lm-evaluation-harness metrics?
I was going through the README and noticed [here](https://github.com/openlm-research/open_llama#evaluation) that this model is performing better than the 7B LLaMA on many tasks, even though it's trained on a fifth of...
First of all, I would like to express my gratitude for making this available under such a permissive license; it opens new doors to both researchers and industry. 1) Could...
Hello. There is a corpus called UberCorpus for Ukrainian that you can add to the project: https://lang.org.ua/en/corpora/#anchor4 In a few days there will be UNLP, an event from the Ukrainian NLP community, and...