Sebastian Raschka
I tried this also with Llama 3 and it seemed to work fine for me there as well. Here are my steps:

```bash
litgpt download --repo_id meta-llama/Meta-Llama-3-8B-Instruct --access_token ...
litgpt...
```
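In case it is useful for anyone following along, here is a sketch of the full sequence I have in mind; the `--repo_id`/`--access_token` flags reflect the LitGPT CLI as it was at the time, and running the model afterwards via `litgpt chat --checkpoint_dir ...` is an assumption about the follow-up step rather than a verbatim copy of my commands:

```bash
# Download the weights (a Hugging Face access token is needed for gated repos).
litgpt download --repo_id meta-llama/Meta-Llama-3-8B-Instruct --access_token <your_hf_token>

# Assumed follow-up step: chat with the downloaded checkpoint.
litgpt chat --checkpoint_dir checkpoints/meta-llama/Meta-Llama-3-8B-Instruct
```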
Sorry to hear about the issues here. I remember having a similar problem on a particular machine. Lowering the number of workers in the TinyStories data code (https://github.com/Lightning-AI/litgpt/blob/main/litgpt/data/tinystories.py) fixed it for me....
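As a rough sketch of that workaround from the CLI (the config path and the `--data.num_workers` override are assumptions about how the jsonargparse-style options are exposed in your LitGPT version, so double-check with `litgpt pretrain --help`):

```bash
# Sketch only: lower the DataLoader worker count for the TinyStories data module.
# The --data.num_workers override assumes the jsonargparse-style CLI exposes that
# field; the config path may also differ between LitGPT versions.
litgpt pretrain --config config_hub/pretrain/tinystories.yaml --data.num_workers 2
```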
This wouldn't change any default behaviors, and it would be analogous to the `initial_validation` argument. Does it look ok to you @awaelchli and @carmocca or do you...
Thanks for the feedback. In my opinion, it's worth adding: the initial validation was considered too expensive, so it was made optional. For the same reason, the final validation can...
I was updating the Loom demos for the Studios in the Readme, and the validation can take up a relatively long portion of the time when doing quick demos (even longer...
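For context, the idea is a flag analogous to `initial_validation` that lets you skip the validation pass at the end of a run. A hypothetical invocation might look like the following; the `final_validation` name and the `--eval.*` spelling are assumptions about how the option would be exposed, not the merged implementation:

```bash
# Hypothetical: skip the (potentially slow) validation pass at the end of a finetuning run.
# The final_validation name and --eval.* spelling are assumptions; see the
# corresponding PR for the final form.
litgpt finetune_lora --checkpoint_dir checkpoints/meta-llama/Meta-Llama-3-8B-Instruct --eval.final_validation false
```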
Glad to hear that solves it. I remember we pinned the bitsandbytes version due to some issues, but I don't recall exactly what those were. https://github.com/Lightning-AI/litgpt/blob/9538d6a8194b6204601dea7eb10bc24c69678494/pyproject.toml#L36
Some people manually upgrade bnb after installing LitGPT. E.g., this also happened to @t-vi. I am currently working on a patch that raises a warning in this case.
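If you are in that situation, a simple way back should be to let pip restore the pinned version rather than guessing it by hand (sketch only; the exact pin lives in pyproject.toml and depends on your LitGPT release):

```bash
# Force-reinstall so pip resolves bitsandbytes back to the version LitGPT pins.
pip install --force-reinstall 'litgpt[all]'

# Verify which bitsandbytes version ended up installed.
pip show bitsandbytes
```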
The limitation you mentioned would be for selectively showing the LoRA args, correct? An alternative would be to show all finetune arguments (full, adapter, lora). I think users will know...
> On 2) Could we keep it pretraining from scratch by default? If not, then there would have to be a very loud warning IMO, and a way to opt...
To summarize from our meeting this morning, an easier path forward might be to use

```bash
litgpt finetune_full
litgpt finetune_lora
litgpt finetune_adapter
litgpt finetune_adapter_v2
```

where we also keep `litgpt...
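For illustration, under that proposal a LoRA run would be invoked roughly like this (sketch only; the checkpoint directory is a placeholder and the remaining options would carry over from the existing finetuning commands):

```bash
# Sketch of the proposed command form; the checkpoint directory is a placeholder.
litgpt finetune_lora --checkpoint_dir checkpoints/meta-llama/Meta-Llama-3-8B-Instruct
```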