NEWbie0709
Hi, thanks so much for your detailed explanation and for clarifying the current state of optimization and quantization in CURLoRA. I wanted to let you know that adding the following...
Other than that, can I ask for advice on training a 22k-sample dataset on an 8B model? Since CURLoRA freezes the base model, will it require more...
Sorry, can I ask how to perform inference with the saved model or checkpoint? When I use it directly, it produces random results.
Here is the code I am using:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# ── 1. Load tokenizer & model ───────────────────────────────────────────────────
model_id = "final_curlora_merged_model_rank16"
print("Loading model – please wait...
```
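Not the original poster's working solution, but a minimal inference sketch under the same assumptions: it reuses the checkpoint path `final_curlora_merged_model_rank16` from the snippet above, and the chat template in `build_prompt` is a placeholder that should be replaced with whatever template the base model was trained with (or `tokenizer.apply_chat_template`):

```python
def build_prompt(user_message: str) -> str:
    """Wrap the user message in a chat template.
    NOTE: this template is a placeholder; substitute the template your
    base model expects, or use tokenizer.apply_chat_template instead."""
    return f"<|user|>\n{user_message}\n<|assistant|>\n"


def generate(prompt: str,
             model_id: str = "final_curlora_merged_model_rank16",
             max_new_tokens: int = 128) -> str:
    # Imports deferred so the sketch can be read without torch installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )
    model.eval()

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,  # greedy decoding: check coherence first
        )
    # Return only the newly generated tokens, not the echoed prompt.
    new_tokens = out[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    print(generate(build_prompt("What does CURLoRA change versus LoRA?")))
```

If output still looks random with a correctly saved checkpoint, two common causes are a checkpoint that contains only adapter weights (the merge was never applied) or a prompt template that does not match the one used during fine-tuning.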
Hi, I am also facing this issue when running the code. This is the error and the code I used.
```
(hqq) user@i9-4090:/mnt/c/Users/i9-4090/Documents/tianyi$ python testing.py
Warning: failed to import the...
```
Hi, can we use a local model / LiteLLM model for `llm_callable`? Currently I'm running the unusual prompt validator and facing the same issue:
> Error getting response from the LLM: litellm.AuthenticationError:...
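Not an official answer, but LiteLLM can route to a locally served model, which avoids the authentication error entirely. A sketch, assuming an Ollama server on its default port and that `llm_callable` accepts any function taking a prompt string and returning a string (the model name `ollama/llama3` and the port are placeholders; check the validator's docs for the exact callable signature it expects):

```python
def build_messages(prompt: str) -> list:
    """Shape a raw prompt into LiteLLM's OpenAI-style messages list."""
    return [{"role": "user", "content": prompt}]


def local_llm_callable(prompt: str) -> str:
    """Route the validator's LLM call to a local model via LiteLLM.
    Assumptions: an Ollama server at localhost:11434 and a locally
    pulled model named 'llama3' -- both are placeholders."""
    from litellm import completion  # deferred so this file imports without litellm

    response = completion(
        model="ollama/llama3",
        messages=build_messages(prompt),
        api_base="http://localhost:11434",
    )
    return response.choices[0].message.content
```

Because no hosted provider is involved, no API key is needed, so `litellm.AuthenticationError` should not be raised for this route.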
I tried running with Qwen-72B-instruct, and this is the error I got:
Hey, do you mind sharing the code for running Qwen2.5-Coder-32B-Instruct, or any other code related to Qwen2.5? I'm currently running Qwen2.5-72B-Instruct, but I keep getting this error. #213 ```...