
Results 37 comments of salman

I've updated my PR @ebsmothers with the changes we discussed :)

Yeah, it's passing locally. It looks like it's failing the second test case. Are there more detailed logs? I sometimes fail some of the tests on my Mac because the...

Thanks so much for your help debugging :) I'll keep that in mind for the future!

Thank you for *your* patience!!

Full fine-tuning using the low-memory config runs fine in [colab](https://colab.research.google.com/drive/19LEdD5Swi4gY5NJ8ZPJ2rnqFSYdrP0qa?usp=sharing). See the `wandb` run [here](https://wandb.ai/salman-mohammadi/torchtune_codellama_testing/runs/zobzkhd3). I'll let it run for ~30 minutes for now, unless you need information...

Try now? The wandb link should work too. It was pretty straightforward! Unfortunately, none of the models can fit in the free GPU since `bf16` isn't supported on the free...

> Can't you just plug the code-llama2 models into our existing LoRA recipes? (Apologies if I'm missing something obvious here though) Sorry, by "not implemented" I just mean that the...

I've added `lora_` and `qlora_` `code_llama_{}b` models, and also added a `qlora_llama2_70b` while I was at it. `torchtune/models/llama2/_model_builders.py` is getting pretty chunky. Do you guys care about this/would you want...
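For context, the pattern behind these builders can be sketched roughly like this. The names, fields, and signatures below are illustrative stand-ins, not torchtune's real API: a LoRA builder wraps the base model's fixed hyperparameters and layers adapter settings on top, and a QLoRA builder is usually just the LoRA builder with the base weights quantized.

```python
from dataclasses import dataclass
from functools import partial

# Hypothetical stand-in for a model config -- torchtune's actual
# builders construct full nn.Module instances with many more arguments.
@dataclass
class Model:
    vocab_size: int
    num_layers: int
    lora_rank: int = 0
    quantize_base: bool = False

def code_llama_7b() -> Model:
    # Base builder: pins the architecture hyperparameters for one size.
    return Model(vocab_size=32016, num_layers=32)

def lora_code_llama_7b(lora_rank: int = 8, quantize_base: bool = False) -> Model:
    # LoRA builder: reuses the base config, adds adapter settings.
    base = code_llama_7b()
    return Model(base.vocab_size, base.num_layers, lora_rank, quantize_base)

# QLoRA builder: the LoRA builder with quantized base weights, so
# partial application is all that's needed.
qlora_code_llama_7b = partial(lora_code_llama_7b, quantize_base=True)
```

This is why a `_model_builders.py` file grows quickly: every model size gets a base, `lora_`, and `qlora_` variant, even though each one is only a few lines.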

Thanks so much for the kind feedback @kartikayk :) I've always wanted to contribute to the pytorch ecosystem - it's really nice to get the opportunity to work with such...