chavinlo


> Did you install xformers? [huggingface/diffusers#1343](https://github.com/huggingface/diffusers/issues/1343)

Yes, but I don't think xformers has anything to do with the validation process...
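
For context, enabling xformers in a diffusers pipeline is a single call, separate from any validation logic. A minimal sketch, assuming a hypothetical Stable Diffusion checkpoint and fp16 weights:

```python
# Minimal sketch: enabling xformers memory-efficient attention in diffusers.
# The model id and dtype are placeholders, not taken from this thread.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Speeds up the attention layers; it does not touch validation logic.
pipe.enable_xformers_memory_efficient_attention()
```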

> Can you share the result file when you finish?

Sure, but there's a LoRA repo that supposedly gives better results than the current one, not sure... here they are: https://huggingface.co/chavinlo/alpaca...

> > Hello, first of all thank you for releasing the training code for alpaca, we really appreciate it.
> >
> > I am running the fine-tuning script on a 4xA100-SXM4-80GB,...

> are you running the released code? best to adapt from there.

Yes, I am running the fine-tuning code from this repo.

@joaopandolfi @devilismyfriend I will be progressively uploading the checkpoints; one is saved every 200 steps, out of 1200 total. Here's the 200-step checkpoint, i.e. 17% of the run: https://huggingface.co/chavinlo/alpaca-native/tree/main
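
For reference, this checkpoint cadence is what the Hugging Face Trainer's `save_steps` controls. A minimal sketch, assuming the Trainer is used as in the Alpaca fine-tuning script; the output directory is a placeholder and the model/dataset wiring is omitted:

```python
# Minimal sketch: checkpoint every 200 steps of a 1200-step run with the
# Hugging Face Trainer. The output directory is a placeholder, and the model,
# tokenizer, and dataset wiring from the fine-tuning script are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="alpaca-native",   # placeholder path
    max_steps=1200,               # total steps, matching the run above
    save_strategy="steps",
    save_steps=200,               # a checkpoint lands every 200 steps
)
```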

> @chavinlo thank you for your work! Are you able to train the LoRA on 13b (or potentially larger)? Also, since the loss stops decreasing after ~1 epoch, it might...
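
As context for the question above, attaching LoRA adapters to a 13B LLaMA checkpoint with `peft` would look roughly like this; a sketch only, where the checkpoint id, rank, alpha, and target modules are common defaults rather than values confirmed in this thread:

```python
# Minimal sketch: attaching LoRA adapters to a 13B LLaMA checkpoint with peft.
# The checkpoint id and hyperparameters are common defaults, not confirmed here.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-13b")
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # LLaMA attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```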

> @chavinlo
>
> > hi @chavinlo ,
>
> are you running the released code? best to adapt from there.
>
> Thanks
>
> and @charliezjw,
>
> ...

> The reason for some of these issues is explained in [this note](https://github.com/tatsu-lab/stanford_alpaca#warning).
>
> Feel free to reopen if it doesn't fully resolve the mysteries :)

My issue is...

@lxuechen Can you reopen this issue? The original problem was about speed, not about layers. Additionally, I have tried cleaning the instance, and I still get the same speed.

[report.txt](https://github.com/tatsu-lab/stanford_alpaca/files/10996673/report.txt) Here is the `nvidia-smi -q` report of my GPUs...
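
The same report can be regenerated by redirecting `nvidia-smi -q` to a file; a minimal Python sketch, assuming the NVIDIA driver utilities are installed and on PATH:

```python
# Minimal sketch: dump the full `nvidia-smi -q` report to report.txt.
# Assumes the NVIDIA driver utilities are installed and on PATH.
import subprocess

with open("report.txt", "w") as f:
    subprocess.run(["nvidia-smi", "-q"], stdout=f, check=True)
```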