ManojAkondi
Hi @danielhanchen, Thank you for your response. I'm unsure about the inner workings of get_peft_model in Unsloth, but assuming it functions similarly to other peft methods, it should freeze the...
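(For reference, one quick way to check which parameters actually end up frozen is to inspect `requires_grad` after the call. This is a minimal sketch assuming the standard Unsloth API; the model name and LoRA settings below are placeholders, not the exact values from my script:)

```python
from unsloth import FastLanguageModel

# Placeholder model name and LoRA hyperparameters, purely for illustration.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=4000,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)

# Count trainable vs. frozen parameters to confirm the base weights are frozen.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
print(f"trainable: {trainable:,} | frozen: {frozen:,}")
```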
Hi @lapp0, Here is the traceback:
```
Traceback (most recent call last):
  File "/home/ubuntu/LLMOps/train/train.py", line 501, in <module>
    main()
  File "/home/ubuntu/LLMOps/train/train.py", line 497, in main
    training_function(args)
  File "/home/ubuntu/LLMOps/train/train.py", line 445, in ...
```
Hi @lapp0, it seems we are both facing a similar issue. I tried removing Unsloth from my code and training with plain Hugging Face utilities, and it went well. But I seriously want...
Here is the pip freeze: [requirements.txt](https://github.com/unslothai/unsloth/files/15253065/requirements.txt)
Here is the full training script: [link](https://github.com/mano3-1/LLMOps/blob/master/train/train.py)
This is how I trigger the training script: `python train.py --max_seq_length 4000 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --sm_train_dir...`
Hi @lapp0, although I named it requirements.txt, I generated it with `pip freeze`. Kindly check the file; you will find the versions of all the libraries.
Hi everyone, I encountered a similar error with llama3-7b.
```
ValueError: You can't train a model that has been loaded in 8-bit precision on a different device than...
```
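In case it helps, the usual suggestion for this ValueError in multi-GPU/DDP setups is to pin the quantized model to the local process's GPU instead of relying on `device_map="auto"`. A rough sketch with plain Hugging Face loading (not my exact script; the model name and env-var handling are assumptions):

```python
import os
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Pin the 8-bit model to this process's GPU so DDP does not find it on another device.
local_rank = int(os.environ.get("LOCAL_RANK", 0))

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",          # placeholder model name
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map={"": local_rank},           # instead of device_map="auto"
    torch_dtype=torch.bfloat16,
)
```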
Hey, I'm curious if someone has figured out a fix for this?
Hi, I followed @danielhanchen's notebook and compared its parameters with mine. When I change the optimizer from `paged_adamw_32bit` to `adamw_8bit`, the NaN issues no longer come up. @lapp0 I...
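In case it's useful to anyone else, the change is just the `optim` field in `TrainingArguments`; the other values here are placeholders, not necessarily what your run needs:

```python
from transformers import TrainingArguments

# Switching optim from "paged_adamw_32bit" to "adamw_8bit" is what made the NaNs go away for me.
training_args = TrainingArguments(
    output_dir="outputs",                 # placeholder
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,        # placeholder
    learning_rate=2e-4,                   # placeholder
    bf16=True,                            # placeholder
    optim="adamw_8bit",                   # was "paged_adamw_32bit"
)
```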