
Fine-tuning pipeline for custom dataset

Open danielduckworth opened this issue 5 years ago • 11 comments

Hi, I think this is great work.

Would you consider adding a notebook for the fine-tuning pipeline?

I have my own dataset of multiple choice questions with answers and distractors that I would like to try fine-tuning with.

danielduckworth avatar Sep 22 '20 09:09 danielduckworth

Hey, thanks!

The qg_training notebook contains the code for fine-tuning a pretrained T5 model. You can try using that if you like.

If you can get your data into a CSV with the questions in one column and the answers and contexts in another, you should be able to load it into QGDataset and run the notebook on your dataset.
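
As a rough sketch (the actual QGDataset constructor and your column names may differ from what I assume here), loading the CSV could look something like:

```python
import pandas as pd

# Hypothetical file and column names -- rename to match your CSV.
df = pd.read_csv("my_questions.csv", encoding="utf-8")

# QGDataset is defined in the qg_training notebook; this assumes it takes
# the question strings and the "<answer> ... <context> ..." strings directly,
# which may not match its real constructor.
dataset = QGDataset(df["question"].tolist(), df["text"].tolist())
```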

I only trained the model to generate the correct answer, and separately used NER to find the other multiple choice answers in the text. I'm not sure how it would perform if you want it to generate the answers and distractors at the same time. You could try concatenating them all together and separating them with the answer token like:

"<answer> answer1 <answer> answer2 <answer> answer3 <answer> answer4 <context> context"

AMontgomerie avatar Sep 24 '20 02:09 AMontgomerie

Excellent, I'll have a play.

danielduckworth avatar Sep 24 '20 07:09 danielduckworth

I'm getting an error with the last cell:


TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
     11 for epoch in range(1, EPOCHS + 1):
     12
---> 13     train()
     14     val_loss = evaluate(model, valid_loader)
     15     print_line()

TypeError: train() missing 2 required positional arguments: 'epoch' and 'best_val_loss'

danielduckworth avatar Sep 24 '20 07:09 danielduckworth

Huh that's strange. train() should be train(epoch, best_val_loss). I'll update it.
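
In the meantime, the loop in the last cell should look roughly like this (how best_val_loss gets updated is my assumption; the notebook may handle it inside train()):

```python
# Sketch only: names like train(), evaluate(), valid_loader and EPOCHS are
# taken from the traceback above; the real notebook may wire them differently.
best_val_loss = float("inf")

for epoch in range(1, EPOCHS + 1):
    train(epoch, best_val_loss)              # pass both positional arguments
    val_loss = evaluate(model, valid_loader)
    best_val_loss = min(best_val_loss, val_loss)
    print_line()
```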

AMontgomerie avatar Sep 24 '20 08:09 AMontgomerie

Thanks. I had to reduce the batch size to 1 on a home GPU, but it looks like it's working. I haven't added the distractors yet. I'll just train on the context, question and answer for now and see how it goes.

danielduckworth avatar Sep 24 '20 08:09 danielduckworth

Yes, it's quite GPU-intensive. I think if you load the notebook in Google Colab and change the runtime type to GPU, you can probably increase the batch size to 4.
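
If it helps, the batch size is just the argument to the PyTorch DataLoader (train_set and valid_set here stand in for whatever QGDataset instances the notebook builds):

```python
from torch.utils.data import DataLoader

# A batch size of 4 usually fits on a Colab GPU for t5-base; drop it back
# down if you hit CUDA out-of-memory errors.
train_loader = DataLoader(train_set, batch_size=4, shuffle=True)
valid_loader = DataLoader(valid_set, batch_size=4)
```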

AMontgomerie avatar Sep 24 '20 08:09 AMontgomerie

Hi Adam, I've run the training process and have the 'qg_pretrained_t5_model_trained.pth' model file.

I modified questiongenerator.py to point to a local folder for the model, but it needs a bit of config stuff. How do I package this trained model for huggingface transformers? Are there some docs I can look at?

danielduckworth avatar Sep 26 '20 01:09 danielduckworth

Never mind, I found model.save_pretrained() and tokenizer.save_pretrained().
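
For anyone else who finds this, the flow is roughly (a sketch, assuming the standard T5 classes from transformers):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Writes the weights plus config.json and the tokenizer files into one folder...
model.save_pretrained("qg_model_dir")
tokenizer.save_pretrained("qg_model_dir")

# ...which then loads back the same way as a hub checkpoint.
model = T5ForConditionalGeneration.from_pretrained("qg_model_dir")
tokenizer = T5Tokenizer.from_pretrained("qg_model_dir")
```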

danielduckworth avatar Sep 26 '20 01:09 danielduckworth

Some of the generated questions are in German. What did I do wrong?

danielduckworth avatar Sep 26 '20 01:09 danielduckworth

That's strange. Are you sure there's no German text in your dataset?

AMontgomerie avatar Sep 26 '20 02:09 AMontgomerie

It's definitely all English, but maybe there are some Unicode errors? The CSV is UTF-8, but it was imported with pandas using latin1 encoding.
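
I'll re-read it with UTF-8 and check for mojibake; something like this (with a hypothetical column name) should surface it:

```python
import pandas as pd

# Re-read the file with the encoding it was actually saved in.
df = pd.read_csv("my_questions.csv", encoding="utf-8")

# Typical UTF-8-read-as-latin1 artifacts show up as "Ã" or "Â" sequences.
# "text" is a hypothetical column name.
suspect = df["text"].str.contains("Ã|Â", na=False, regex=True)
print(df.loc[suspect, "text"].head())
```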

danielduckworth avatar Sep 26 '20 05:09 danielduckworth