Ariel N. Lee

I used 3 A100 80GB GPUs for 1.6-34b and 1 A100 80GB GPU for 1.6-mistral-7b. Note: I've only tried this for low-rank fine-tuning, not full fine-tuning! https://github.com/arielnlee/LLaVA-1.6-ft
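
For anyone curious, here's a rough sketch of what a LoRA setup along these lines can look like with the Hugging Face `transformers` and `peft` libraries. The model id, target modules, and hyperparameters below are illustrative guesses, not the linked repo's actual config:

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import LlavaNextForConditionalGeneration

# Load the 7B variant; with LoRA it fits on a single A100 80GB.
model = LlavaNextForConditionalGeneration.from_pretrained(
    "llava-hf/llava-v1.6-mistral-7b-hf",
    torch_dtype=torch.bfloat16,
)

# Low-rank adapter config; r, alpha, dropout, and target_modules
# are assumed values, not the repo's settings.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Only the small adapter matrices are trained; base weights stay frozen.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Freezing the base weights and training only the adapters is what keeps the memory footprint low enough for the GPU counts above.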

> Hey, @arielnlee Do you have a notebook for fine-tuning 1.6-34b?

I don’t, but I can throw one together this week!

> > Hi, how do you know the training was effective? Did you use the default training setting? I ran LoRA with the default parameters and got basically no improvement.
> >
> > I...
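
One generic way to sanity-check whether LoRA training actually did anything (an illustrative approach on my part, not the truncated reply above): inspect the adapter weights directly. PEFT initializes every `lora_B` matrix to zeros, so if their total norm is still near zero after training, the adapters never moved. The `lora_update_norm` helper below is hypothetical:

```python
import torch

def lora_update_norm(model: torch.nn.Module) -> float:
    """Total L2 norm of all lora_B weights; ~0.0 means training had no effect."""
    total = 0.0
    for name, param in model.named_parameters():
        if "lora_B" in name:
            total += param.detach().float().norm().item()
    return total

# After training, expect a clearly non-zero value:
# print(lora_update_norm(model))
```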

Ofc, glad you found it useful! I'm sure the author's version is far superior.