Carlos Mocholí
We recently added resuming capabilities to the pretraining scripts: #230, #229. If you want to give it a shot, the changes to the fine-tuning scripts would be the same
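The resume logic referenced above can be sketched like this. This is a minimal toy, not the lit-gpt implementation (which checkpoints model and optimizer state via Fabric): the idea is simply to persist the step counter and state periodically, and reload the latest checkpoint on startup so a second invocation continues where the first stopped.

```python
# Toy sketch of resumable training (illustrative only, not the lit-gpt code):
# save a checkpoint every few steps; on startup, load it and continue.
import json
import os
import tempfile

def save_checkpoint(path, step, state):
    with open(path, "w") as f:
        json.dump({"step": step, "state": state}, f)

def load_checkpoint(path):
    if not os.path.exists(path):
        return 0, {}  # no checkpoint: start a fresh run
    with open(path) as f:
        ckpt = json.load(f)
    return ckpt["step"], ckpt["state"]

def train(path, total_steps):
    step, state = load_checkpoint(path)  # resume if a checkpoint exists
    while step < total_steps:
        step += 1
        state["loss"] = 1.0 / step       # stand-in for a real training step
        if step % 10 == 0:
            save_checkpoint(path, step, state)
    return step

ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")
train(ckpt, 10)                          # first run stops after 10 steps
resumed_step, _ = load_checkpoint(ckpt)  # -> 10: the second run picks up here
train(ckpt, 25)                          # continues from step 10, not from 0
```

The real scripts do the same thing with `torch.save`-style checkpoints that also capture optimizer and dataloader state.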
Duplicate of #209
You can try the suggestions described in https://github.com/Lightning-AI/lit-gpt/blob/main/tutorials/oom.md
QLoRA fine-tuning support is tracked in #176. I have an open PR, #253, that adds QLoRA inference support
That might be enough. I don't advertise that we support fine-tuning because I haven't played extensively with it to be confident. Feel free to give it a shot by adding...
What is your `lightning` version? This shouldn't happen on master. You can upgrade it by doing: `pip uninstall -y lightning; pip install -r requirements.txt`
Good catch! I think it was meant to link to https://lightning.ai/blog/how-to-finetune-gpt-like-large-language-models-on-a-custom-dataset Would you like to open a PR with the fix?
The HF leaderboard only supports HF model definitions. Conversion from our format to HF is tracked in #183. Alternatively, you could run the https://github.com/EleutherAI/lm-evaluation-harness which is what the leaderboard uses...
The https://github.com/Lightning-AI/lit-gpt/blob/main/tutorials/evaluation.md tutorial goes over this
I just ran the steps and it worked. Did you call `python scripts/prepare_alpaca.py --checkpoint_dir checkpoints/tiiuae/falcon-7b`? Alpaca needs to be processed with the specific model's tokenizer. Can you describe which steps...
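To illustrate why the data must be prepared with the matching tokenizer, here is a toy sketch (not the lit-gpt tokenizer): two models with different vocabularies encode the same prompt into different id sequences, so a dataset tokenized for one model is garbage for another.

```python
# Toy illustration: different vocabularies -> different token ids for the
# same text. The vocabularies below are made up for the example.
def make_tokenizer(vocab):
    ids = {tok: i for i, tok in enumerate(vocab)}
    # whitespace "tokenizer"; unknown words map to a shared <unk> id
    return lambda text: [ids.get(tok, len(ids)) for tok in text.split()]

model_a = make_tokenizer(["below", "is", "an", "instruction"])
model_b = make_tokenizer(["instruction", "is", "below", "an"])

prompt = "below is an instruction"
print(model_a(prompt))  # [0, 1, 2, 3]
print(model_b(prompt))  # [2, 1, 3, 0]
```

Real tokenizers (Falcon's vs. LLaMA's, say) differ far more than this, which is why `prepare_alpaca.py` takes `--checkpoint_dir`: it loads that model's tokenizer to encode the dataset.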