How to finetune this model
Thanks for releasing this foundation model! I wonder if you have code that demonstrates how to finetune Lag-Llama with custom training data? Thanks!
Coming up after a short recoup! Stay tuned. Essentially, since it's a GluonTS Estimator, one should be able to call estimator.train()
on your data, but we need to remove some bespoke fields that were left behind in the Lightning callbacks...
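A minimal sketch of that workflow, assuming the LagLlamaEstimator in this repo follows the standard GluonTS Estimator interface; the constructor arguments beyond prediction_length/context_length are assumptions, so check them against the repo before relying on this:

```python
# Sketch only: argument names other than prediction_length/context_length
# are assumptions about LagLlamaEstimator, not a confirmed API.
from gluonts.dataset.repository import get_dataset
from lag_llama.gluon.estimator import LagLlamaEstimator

dataset = get_dataset("m4_hourly")  # any GluonTS-compatible dataset works

estimator = LagLlamaEstimator(
    ckpt_path="lag-llama.ckpt",  # pretrained checkpoint released with the repo
    prediction_length=dataset.metadata.prediction_length,
    context_length=32,
    trainer_kwargs={"max_epochs": 5},  # keep training short for fine-tuning
)

# Estimator.train() fits the underlying Lightning module on the data
# and returns a Predictor ready for inference.
predictor = estimator.train(dataset.train, cache_data=True)
```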
Yes, as @kashif says, the fine-tuning scripts are coming soon. Scripts to load datasets in custom formats are also coming soon. Thank you for your patience.
@zhilif have a look at the colab https://colab.research.google.com/drive/1uvTmh-pe1zO5TeaaRVDdoEWJ5dFDI-pA?usp=sharing
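Until the custom-format loading scripts land, a long-format dataframe can usually be wrapped in a GluonTS PandasDataset and passed straight to estimator.train(); this is a sketch, and the file name and column names below are illustrative assumptions about your data, not a Lag-Llama requirement:

```python
# Sketch: wrap a custom long-format CSV for GluonTS. The file name and
# column names (item_id, timestamp, target) are illustrative assumptions.
import pandas as pd
from gluonts.dataset.pandas import PandasDataset

df = pd.read_csv("my_series.csv")
df["timestamp"] = pd.to_datetime(df["timestamp"])
df = df.set_index("timestamp")  # the datetime index supplies the timestamps

dataset = PandasDataset.from_long_dataframe(
    df, item_id="item_id", target="target", freq="H"
)
# `dataset` can now be used as the training data for estimator.train()
```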
@kashif thank you for providing the code. Unfortunately, I got this error during fine-tuning:
TypeError: model must be a LightningModule or torch._dynamo.OptimizedModule, got LagLlamaLightningModule
@shahrokhvahabi try to git pull; I believe it's somehow running older code
@kashif I deleted lag-llama and cloned it again from the repository, but the error occurred again.
@shahrokhvahabi it's due to the newer Lightning version you have... so try to do:
!pip uninstall -y gluonts
!pip install -U gluonts[torch]
@kashif I did, but the error persists.
ah sorry, I meant to say:
!pip uninstall -y lightning pytorch-lightning gluonts
!pip install -U gluonts[torch]
Thank you for the reply, @kashif. I did as you suggested above. Could you please give me another suggestion to avoid this error?
Yeah, not sure what more I can do... somehow your setup is pulling in a newer version of Lightning. Perhaps:
!pip install lightning==2.1.4 pytorch-lightning==2.1.4
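If the pin still doesn't take, it can help to print what actually ended up installed; this snippet only reports versions:

```python
# Quick sanity check of the installed versions after reinstalling.
import gluonts
import lightning
import pytorch_lightning

print("gluonts:", gluonts.__version__)
print("lightning:", lightning.__version__)
print("pytorch-lightning:", pytorch_lightning.__version__)
```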
@kashif Could you please point me to the files you updated so that I can replace them manually?
Please make a new issue instead of hijacking this one, as the original author is getting spammed unnecessarily. Closing this.