
Fine Tuning not working

Open NomanSaleem4 opened this issue 4 years ago • 5 comments

Fine-tuning is not working: https://colab.research.google.com/drive/15qBZx5y9rdaQSyWpsreMDnTiZ5IlN0zD?usp=sharing

I am facing this error:

TypeError: __init__() got an unexpected keyword argument 'show_progress_bar'

A solution is urgently needed. Thanks a lot.

NomanSaleem4 avatar Aug 21 '20 16:08 NomanSaleem4


I am also facing the same issue while training a new GPT-2 model, as described here: https://github.com/minimaxir/aitextgen/tree/master#quick-examples

It seems fine-tuning and training both throw the same error:

Line: ai.train(data, batch_size=16, num_steps=5000)


TypeError                                 Traceback (most recent call last)
<ipython-input-8-f3cb7ed458fd> in <module>()
     22 # Train the model! It will save pytorch_model.bin periodically and after completion.
     23 # On a 2016 MacBook Pro, this took ~25 minutes to run.
---> 24 ai.train(data, batch_size=16, num_steps=5000)
     25 
     26 # Generate text from it!

/usr/local/lib/python3.6/dist-packages/aitextgen/aitextgen.py in train(self, train_data, output_dir, fp16, fp16_opt_level, n_gpu, n_tpu_cores, max_grad_norm, gradient_accumulation_steps, seed, learning_rate, weight_decay, adam_epsilon, warmup_steps, num_steps, save_every, generate_every, n_generate, loggers, batch_size, num_workers, benchmark, avg_loss_smoothing, save_gdrive, run_id, progress_bar_refresh_rate, **kwargs)
    562             train_params["distributed_backend"] = "ddp"
    563 
--> 564         trainer = pl.Trainer(**train_params)
    565         trainer.fit(train_model)
    566 

TypeError: __init__() got an unexpected keyword argument 'show_progress_bar'

Nomiluks avatar Aug 21 '20 17:08 Nomiluks

@NomanSaleem4 I have spent some time on this and figured out that the pytorch-lightning library has changed a few things in its Trainer API, which is causing this error.

pip3 install pytorch-lightning==0.8.4

Downgrading pytorch-lightning worked for me.

Nomiluks avatar Aug 21 '20 18:08 Nomiluks
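The failing keyword is a symptom of pytorch-lightning renaming and removing Trainer arguments between releases; show_progress_bar is one of them, so aitextgen's pl.Trainer(**train_params) call breaks on newer versions. A minimal sketch of a version guard, assuming 0.8.4 is the last compatible release as reported above (version_tuple and is_compatible are hypothetical helpers, not aitextgen or pytorch-lightning API):

```python
# Illustrative sketch: compare a pytorch-lightning version string against
# the last release reported to accept the Trainer arguments aitextgen passes.
# KNOWN_GOOD, version_tuple, and is_compatible are hypothetical helpers.
KNOWN_GOOD = "0.8.4"

def version_tuple(v):
    # Turn "0.8.4" into (0, 8, 4) for a numeric, not lexical, comparison.
    return tuple(int(part) for part in v.split(".")[:3])

def is_compatible(installed, known_good=KNOWN_GOOD):
    # Anything newer than the known-good release may have dropped
    # keyword arguments like show_progress_bar from pl.Trainer.
    return version_tuple(installed) <= version_tuple(known_good)

print(is_compatible("0.8.4"))  # True: matches the pinned release
print(is_compatible("0.9.0"))  # False: newer Trainer API, may break
```

A check like this only warns; the actual fix is still pinning the dependency with pip3 install pytorch-lightning==0.8.4 as above, or upgrading aitextgen once it supports the newer Trainer API.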

Thanks it worked

NomanSaleem4 avatar Aug 21 '20 18:08 NomanSaleem4

I am having the same issue. In fact, after installing all those packages, I got the error message below:

RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 15.75 GiB total capacity; 14.41 GiB already allocated; 18.88 MiB free; 14.41 GiB reserved in total by PyTorch)

AdaUchendu avatar Nov 27 '20 21:11 AdaUchendu
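The out-of-memory error above is a GPU capacity problem rather than an API mismatch. A common workaround, not specific to aitextgen although the train() signature in the earlier traceback does accept gradient_accumulation_steps, is to shrink batch_size and accumulate gradients over several micro-batches so the effective batch size is unchanged. A sketch of the arithmetic:

```python
def effective_batch_size(batch_size, grad_accum_steps=1):
    # Gradient accumulation sums gradients over several small batches
    # before each optimizer step, so peak memory scales with batch_size
    # while optimization behaves like batch_size * grad_accum_steps.
    return batch_size * grad_accum_steps

# The failing call used batch_size=16; four accumulated micro-batches of 4
# keep the same effective batch size with roughly a quarter of the
# per-step activation memory.
print(effective_batch_size(16))    # 16
print(effective_batch_size(4, 4))  # 16
```

Applied to the call from the traceback, that would be something like ai.train(data, batch_size=4, gradient_accumulation_steps=4, num_steps=5000) (untested here); restarting the Colab runtime also releases memory still held by a previous failed run.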


Remove the batch_size argument; I had the same error.

breadbrowser avatar Jul 16 '22 14:07 breadbrowser