Bump bump. Is this still planned @maximegmd? If not, I'm glad to pick it up. It's been invaluable to me so far, so I'd be happy to see other users be...
> This just needs to be updated to use the latest util functions, I will do it tomorrow, then it should be ready to merge unless I am not seeing...
> I assume this is specifically on MPS? cc @msaroufim for thoughts here

Yeah, just on my machine; it's been working fine on Linux.
> @SalmanMohammadi will the custom recipe tutorial in #1196 address this?

Thanks for the ping, I hadn't seen this issue! This is some great feedback and I'll definitely be incorporating...
I picked this up, hope it's not poor etiquette :) https://github.com/pytorch/torchtune/pull/847
Happy to say this is implemented @sgupta1007 :) Feel free to try it out and update here if there are any issues; otherwise, would it be appropriate to close this issue?
Could you try the following command?

```
!tune run lora_finetune_single_device --config code_llama2/7B_qlora_single_device checkpointer=torchtune.utils.FullModelMetaCheckpointer checkpointer.checkpoint_dir=/codellama-main/llama/CodeLlama-7b-Instruct/ tokenizer.path=/codellama-main/llama/CodeLlama-7b-Instruct/tokenizer.model checkpointer.output_dir=/llm/ checkpointer.checkpoint_files=[consolidated.00.pth] dtype=fp32
```

`code_llama2` models and recipe configs are under their own folder, rather...
My apologies, `code_llama2` isn't in the release yet! You can follow the process [here](https://github.com/pytorch/torchtune/issues/870#issuecomment-2077268104) to install from source to get it working. Let me know how you get on.
Thanks @RdoubleA :) I've fixed the linting. I tried setting it up this way, but in the `forward` for `TransformerClassifier` I also use the input token ids to grab the...
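For context, here's a minimal sketch of the pattern I'm describing, assuming right-padded sequences and a hypothetical `pad_id`/`backbone`; this is illustrative only, not torchtune's actual `TransformerClassifier`:

```python
import torch
import torch.nn as nn


class ClassifierSketch(nn.Module):
    """Illustrative only: pool the hidden state at each sequence's last
    non-padding token, located from the input token ids."""

    def __init__(self, backbone: nn.Module, hidden_dim: int, num_classes: int, pad_id: int = 0):
        super().__init__()
        self.backbone = backbone  # maps [b, s] token ids -> [b, s, d] hidden states
        self.head = nn.Linear(hidden_dim, num_classes)
        self.pad_id = pad_id

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        hidden = self.backbone(tokens)                     # [b, s, d]
        # This is why forward needs the token ids: find each sequence's
        # last non-pad position (assumes right padding).
        last_idx = (tokens != self.pad_id).sum(dim=1) - 1  # [b]
        batch_idx = torch.arange(tokens.size(0), device=tokens.device)
        pooled = hidden[batch_idx, last_idx]               # [b, d]
        return self.head(pooled)                           # [b, num_classes]
```

So the classifier can't just wrap the backbone's output; it needs the raw token ids alongside the hidden states.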
Note: the type hints are wrong for the component and model builders (the docs are correct). I think you're right that my pre-commit hooks still aren't working correctly. I'll fix those...
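To illustrate the kind of mismatch (the names here are hypothetical): the annotation on a builder should agree with the type the docstring says it returns.

```python
from torch import nn


class TransformerClassifier(nn.Module):
    """Stand-in class, for illustration only."""


# Hypothetical builder: suppose the docstring (correctly) says this returns
# a TransformerClassifier, but the annotation claimed something else, e.g.
# TransformerDecoder. The fix is to make the hint match the documented type.
def classifier_builder() -> TransformerClassifier:
    return TransformerClassifier()
```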