Fine-tuning flan-t5-xl and flan-t5-xxl with QLoRA: learning rate 0.0 and loss 0.0?
I'm fine-tuning flan-t5-xl and flan-t5-xxl with QLoRA and the logs show learning rate 0.0 and loss 0.0. Can anyone help resolve this? Thanks.
A learning rate of 0 makes no sense.
I'm facing the same issue. In my case I used flan-t5-base with learning_rate=2e-4 and the training loss is 0.0. If anyone has solved this, let me know.
trainable params: 884,736 || all params: 168,246,528 || trainable%: 0.5258569139685307
{'loss': 0.0, 'learning_rate': 0.0, 'epoch': 0.21}
Same issue here, any hint might be helpful.
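For reference, that trainable-parameter count is consistent with LoRA rank 8 applied to the q and v projections of flan-t5-base (72 adapted modules × 8 × (768 + 768) = 884,736). The actual config isn't shown in the comment above, so the sketch below is an assumption:

```python
from peft import LoraConfig, TaskType, get_peft_model

# Assumed LoRA setup; rank 8 on the "q" and "v" projections of flan-t5-base
# reproduces the 884,736 trainable parameters printed above.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q", "v"],
    lora_dropout=0.05,
    bias="none",
    task_type=TaskType.SEQ_2_SEQ_LM,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```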
Setting the fp16 param to False solved my issue.
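A minimal sketch of what that change looks like in the training arguments; every value other than fp16/bf16 is a placeholder, not taken from this thread:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-qlora",
    per_device_train_batch_size=8,
    learning_rate=2e-4,
    num_train_epochs=3,
    logging_steps=10,
    fp16=False,  # fp16 mixed precision is what reportedly breaks the loss/learning rate with T5
    bf16=True,   # assumption: the GPU supports bfloat16; drop this line otherwise
)
```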
Hi, I'm also fine-tuning Flan-T5-xl. Yes, fp16=True causes the loss problem.
I have another question: which optimizer are you using? Paged AdamW, or Adafactor (the same as the original T5 training)? Thanks!
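For context, both optimizers mentioned above can be selected through the `optim` argument of the training arguments; the names below assume a recent transformers release, and paged AdamW additionally requires bitsandbytes:

```python
from transformers import Seq2SeqTrainingArguments

# Paged AdamW (the QLoRA paper's optimizer) vs. Adafactor (as in original T5 training)
paged_args = Seq2SeqTrainingArguments(output_dir="out", optim="paged_adamw_32bit")
adafactor_args = Seq2SeqTrainingArguments(output_dir="out", optim="adafactor")
```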
Hi, were you able to use QLoRA with Flan-T5? I tried, but I got this error:
ValueError: Trying to set a tensor of shape torch.Size([4096, 4096]) in "weight" (which has shape torch.Size([8388608, 1])), this look incorrect.
here is my code
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig

model_name = "google/flan-t5-xxl"

# 4-bit NF4 quantization config for QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    llm_int8_enable_fp32_cpu_offload=True,
)

model = AutoModelForSeq2SeqLM.from_pretrained(  # for "google/flan-t5-xxl"
    model_name,
    quantization_config=bnb_config,
    device_map='auto',
    torch_dtype=torch.bfloat16,
)
model.config.use_cache = False
model.config.pretraining_tp = 1
model.eval()

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token  # note: T5 tokenizers already define a pad token, so this line is usually unnecessary
The error occurs when I call generate here:
# Generate model output using greedy decoding
# (`inputs` is the tokenized prompt, produced elsewhere and not shown here)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=1, do_sample=False, top_p=None)
Can you please help me with this?
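One observation on the error above: 8,388,608 × 1 is exactly 4096 × 4096 packed into 4-bit storage, so the checkpoint weight seems to be loaded into a parameter that has already been quantized. As an assumption rather than a confirmed fix, one thing worth trying is dropping the CPU-offload flag and the explicit torch_dtype and keeping the whole model on a single GPU:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    # no llm_int8_enable_fp32_cpu_offload: modules offloaded to CPU cannot stay in 4-bit
)

model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-t5-xxl",
    quantization_config=bnb_config,
    device_map={"": 0},  # assumption: the whole model fits on one GPU
)
```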