long_llama
Finetuning code?
That sounds massively interesting. While we try out inference and read the paper, should we expect a release of the finetuning code?
Hi, thanks for your interest in our work! That's right, we currently support only inference. We are considering releasing examples for finetuning our models with the PyTorch/Hugging Face API.
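For anyone who wants to experiment in the meantime, here is a minimal sketch of what a Hugging Face finetuning setup might look like. The `syzymon/long_llama_3b` checkpoint name comes from this repo's README; the training-loop details are purely illustrative, not the authors' pipeline:

```python
# Minimal sketch: loading LongLLaMA via the Hugging Face API and taking
# one finetuning step. The checkpoint id is from the repo README; the
# optimizer/loss setup is an assumption, not the authors' code.
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("syzymon/long_llama_3b")
model = AutoModelForCausalLM.from_pretrained(
    "syzymon/long_llama_3b",
    torch_dtype=torch.float32,
    trust_remote_code=True,  # the custom LongLLaMA modeling code ships with the checkpoint
)

# A hypothetical finetuning step with the standard causal LM loss:
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
batch = tokenizer("Example training text.", return_tensors="pt")
outputs = model(**batch, labels=batch["input_ids"])
outputs.loss.backward()
optimizer.step()
```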
@syzymon Is there any plan to release the training pipeline (is it based on the EasyLM library)? Thank you!
Hoping to see your finetuning code ASAP, since your work is very interesting!
The continued pretraining pipeline (used to train the long_llama_3b base model) is based on EasyLM.
We are planning to release the instruction tuning code in PyTorch, along with checkpoints and examples, early next week. Stay tuned!
Will you also be releasing the pretraining code? Since the contrastive training seems to be a very important element of your great results, it would be nice if we could try to recreate it.
We are working on LongLLaMA v2, which will be a bigger release. After that, we will release the pretraining code, which is in JAX and based on the EasyLM codebase (the same one used for OpenLLaMA pretraining). You can expect the instruction finetuning code in PyTorch to be out very soon (basically next week). There are no plans to implement FoT pretraining in PyTorch on our side, as our compute is TPU-based. Stay tuned for LongLLaMA v2, which will definitely be out in August!
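For context on the contrastive element mentioned above: as I read the FoT paper, memory attention layers are exposed during training to keys and values from both the current document and unrelated documents, so the attention softmax itself has to separate relevant keys from distractors. A toy sketch of that cross-batch idea (shapes and details are illustrative only, not the authors' JAX/EasyLM implementation):

```python
# Toy sketch of the cross-batch exposure behind FoT's contrastive training:
# queries attend over keys from the current document plus "negative" keys
# mixed in from other documents. This is an illustration of the idea from
# the paper, not the released training code.
import torch

d = 64
q = torch.randn(8, d)            # queries from the current document
k_pos = torch.randn(8, 16, d)    # keys from the same document
k_neg = torch.randn(8, 48, d)    # keys mixed in from other documents

keys = torch.cat([k_pos, k_neg], dim=1)               # (8, 64, d)
scores = torch.einsum("qd,qkd->qk", q, keys) / d ** 0.5
attn = scores.softmax(dim=-1)

# Attention mass landing on same-document keys; training should push this up.
relevant_mass = attn[:, :16].sum(dim=-1).mean().item()
print(f"attention mass on same-document keys: {relevant_mass:.3f}")
```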
In case you haven't seen it, the instruction tuning code is already out! See https://twitter.com/s_tworkowski/status/1687620785379360768 and the READMEs in this repo for more details.
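For anyone landing here later, trying the instruction-tuned model should look roughly like the sketch below. The `syzymon/long_llama_3b_instruct` checkpoint id and the prompt format are my assumptions based on the repo's naming; check the READMEs for the exact model id and prompt conventions:

```python
# Minimal sketch of generating with the instruction-tuned checkpoint.
# Checkpoint id and prompt format are assumptions; see the repo READMEs.
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("syzymon/long_llama_3b_instruct")
model = AutoModelForCausalLM.from_pretrained(
    "syzymon/long_llama_3b_instruct",
    torch_dtype=torch.float32,
    trust_remote_code=True,
)

prompt = "Summarize the Focused Transformer paper in two sentences."
inputs = tokenizer(prompt, return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```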