
Does this support only the SQuAD dataset?

Open nabinkhadka opened this issue 4 years ago • 5 comments

I was wondering if I can simply give it any dataset. It looks like it needs questions, answers, and contexts, so I suppose a dataset like the following, for example, would be sufficient for training?

questions | answer | context
--- | --- | ---
q1 | a1 | context 1
q2 | a2 | context 2
q3 | a3 | context 3
q4 | a4 | context 4

This way, can I train by just loading the dataset and leaving the rest of the notebook the same?

valid_dataset = nlp.load_dataset('csv', data_files='/content/drive/My Drive/context_created.csv', split='train[:10%]')
train_dataset = nlp.load_dataset('csv', data_files='/content/drive/My Drive/context_created.csv', split='train[10%:]')
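Loading the CSV is only half the job: each row still has to be converted into the text format the notebook's models are trained on. Below is a minimal sketch of that conversion; the `<hl>` highlight token and the `generate question:` prefix are assumptions based on the repo's README, so adjust them to whatever your checkpoint expects.

```python
def to_qg_example(question, answer, context, hl_token="<hl>"):
    """Build a (source, target) text pair in the answer-aware
    question-generation format: the answer span is wrapped in highlight
    tokens inside the context, and the question becomes the target.

    The exact prefix and special tokens are assumptions for illustration;
    check the README for the format your model was trained with.
    """
    if answer not in context:
        raise ValueError("answer must appear verbatim in context")
    # Highlight only the first occurrence of the answer span.
    highlighted = context.replace(answer, f"{hl_token} {answer} {hl_token}", 1)
    source_text = f"generate question: {highlighted} </s>"
    target_text = f"{question} </s>"
    return source_text, target_text
```

You would map this over every CSV row before tokenization, producing the `source_text`/`target_text` pairs the tokenizer turns into model inputs.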

nabinkhadka avatar Oct 19 '20 15:10 nabinkhadka

Yes, any dataset can be used to train the models.

If you put the data in SQuAD format and change the paths used by the train and valid split generators in squad_multitask.py, you'll be good to go.

WadoodAbdul avatar Oct 22 '20 05:10 WadoodAbdul

@WadoodAbdul have you tried it? Were you able to load the model correctly? If so, could you share the snippet, please?

Thank you

nabinkhadka avatar Oct 22 '20 08:10 nabinkhadka

Thanks for answering the issue @WadoodAbdul.

For now, squad_multitask is tied to the SQuAD dataset, but it's possible to use your own QA dataset, as @WadoodAbdul said.

If you don't want to use that script, you can use a custom dataset as follows.

  1. Process your dataset into the input format the model expects (described in the README). You can reuse the code from the prepare_data.py script.
  2. Make sure the dataset returns source_ids, target_ids, and attention_mask.
  3. Use your dataset here instead of loading the cached data.
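The steps above can be sketched with a minimal stand-in dataset. The whitespace "tokenizer" here is only a placeholder for the real T5 tokenizer, and apart from the three field names from step 2, everything in this sketch is an assumption for illustration:

```python
def toy_encode(text, vocab, max_len, pad_id=0):
    """Placeholder for tokenizer(...): maps whitespace tokens to ids,
    truncates to max_len, and pads with pad_id."""
    ids = [vocab.setdefault(w, len(vocab) + 1) for w in text.split()][:max_len]
    attention_mask = [1] * len(ids) + [0] * (max_len - len(ids))
    ids = ids + [pad_id] * (max_len - len(ids))
    return ids, attention_mask


class QGDataset:
    """Minimal stand-in for the cached dataset: each item returns the
    three fields the training loop expects (step 2 above)."""

    def __init__(self, pairs, max_len=32):
        # pairs: list of (source_text, target_text) tuples, already in
        # the model's input format (step 1).
        self.pairs = pairs
        self.vocab = {}
        self.max_len = max_len

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        source_text, target_text = self.pairs[idx]
        source_ids, attention_mask = toy_encode(source_text, self.vocab, self.max_len)
        target_ids, _ = toy_encode(target_text, self.vocab, self.max_len)
        return {
            "source_ids": source_ids,
            "target_ids": target_ids,
            "attention_mask": attention_mask,
        }
```

In the actual notebook you would wrap a real tokenizer instead of `toy_encode` and plug this dataset in where the cached data is loaded (step 3).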

The rest of the code can stay the same. Let me know if this helps.

patil-suraj avatar Oct 22 '20 14:10 patil-suraj

Hi @patil-suraj, thank you for the nice repo. Could you show us how to fine-tune using a custom dataset? I've tried the approach of changing the link in squad_multitask.py, but it keeps failing, and I had no luck loading the files directly with nlp.load_dataset in prepare_data.py.

The following are my datasets:

  • https://raw.githubusercontent.com/hariesramdhani/test_repo/main/dev-v1.1.json
  • https://raw.githubusercontent.com/hariesramdhani/test_repo/main/train-v1.1.json

Thank you very much

hariesramdhani avatar Dec 08 '20 10:12 hariesramdhani

@hariesramdhani I'm not sure if you are still stuck, but you need to pass data_files='/path/to/file' to nlp.load_dataset() (source: https://huggingface.co/docs/datasets/v0.4.0/add_dataset.html). Example:

natural_question = nlp.load_dataset("natural_questions", data_files='/content/drive/My Drive/natural_questions', split=nlp.Split.TRAIN)

pat266 avatar Mar 19 '22 04:03 pat266