question_generation
Does this support only the SQuAD dataset?
I was wondering if I can simply give it any dataset. It looks like it needs `questions`, `answer`, and `context`. So I suppose the following dataset, for example, would be sufficient for training?
| questions | answer | context |
| --- | --- | --- |
| q1 | a1 | context 1 |
| q2 | a2 | context 2 |
| q3 | a3 | context 3 |
| q4 | a4 | context 4 |
Can I then train by loading the dataset like this and leaving the rest of the notebook the same?

```python
valid_dataset = nlp.load_dataset('csv', data_files='/content/drive/My Drive/context_created.csv', split='train[:10%]')
train_dataset = nlp.load_dataset('csv', data_files='/content/drive/My Drive/context_created.csv', split='train[10%:]')
```
Yes, any dataset can be used to train the models.
If you put your data in SQuAD format and change the directory used by the train and valid split generator in `squad_multitask.py`, you'll be good to go.
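For reference, the change @WadoodAbdul describes is a small edit inside the dataset script. The sketch below is not the exact code from `squad_multitask.py` (names and details may differ in your copy); it only illustrates the idea of pointing the split generator at your own SQuAD-format files, and the file paths are hypothetical placeholders:

```python
import nlp

class SquadMultitask(nlp.GeneratorBasedBuilder):
    # ...config, _info, and _generate_examples stay as they are in the repo...

    def _split_generators(self, dl_manager):
        # Point these at your own SQuAD-format JSON files (local paths or
        # raw URLs). The paths below are hypothetical placeholders.
        files = {
            "train": "/content/drive/My Drive/my_train-v1.1.json",
            "dev": "/content/drive/My Drive/my_dev-v1.1.json",
        }
        downloaded_files = dl_manager.download_and_extract(files)
        return [
            nlp.SplitGenerator(
                name=nlp.Split.TRAIN,
                gen_kwargs={"filepath": downloaded_files["train"]},
            ),
            nlp.SplitGenerator(
                name=nlp.Split.VALIDATION,
                gen_kwargs={"filepath": downloaded_files["dev"]},
            ),
        ]
```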
@WadoodAbdul have you tried it? Were you able to load the model correctly? If so, can you share the snippet, please?
Thank you
Thanks for answering the issue, @WadoodAbdul.
For now `squad_multitask` is tied to the SQuAD dataset, but it's possible to use your own QA dataset, as @WadoodAbdul said.
If you don't want to use that script, you can use a custom dataset as follows:

- Process your dataset according to the format the model expects (described in the README). You can use the code from the `prepare_data.py` script.
- Make sure the dataset returns `source_ids`, `target_ids`, and `attention_mask`.
- Use your dataset here instead of loading the cached data.

The rest of the code can stay the same. Let me know if this helps.
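For illustration, here is a minimal sketch of such a dataset. It is not the repo's code: the `answer: ... context: ...` input format is my reading of the README's answer-aware QG convention, and it assumes a `transformers` version where the tokenizer is callable, so double-check both against `prepare_data.py`.

```python
from torch.utils.data import Dataset

class CustomQGDataset(Dataset):
    """Yields the three fields the trainer expects:
    source_ids, target_ids, and attention_mask."""

    def __init__(self, examples, tokenizer, max_source_len=512, max_target_len=32):
        # examples: list of dicts with "question", "answer", "context" keys
        self.examples = examples
        self.tokenizer = tokenizer
        self.max_source_len = max_source_len
        self.max_target_len = max_target_len

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        ex = self.examples[idx]
        # Assumed answer-aware QG format; verify against prepare_data.py.
        source_text = f"answer: {ex['answer']}  context: {ex['context']}"
        target_text = ex["question"]

        source = self.tokenizer(
            source_text, max_length=self.max_source_len,
            padding="max_length", truncation=True, return_tensors="pt",
        )
        target = self.tokenizer(
            target_text, max_length=self.max_target_len,
            padding="max_length", truncation=True, return_tensors="pt",
        )
        return {
            "source_ids": source["input_ids"].squeeze(0),
            "attention_mask": source["attention_mask"].squeeze(0),
            "target_ids": target["input_ids"].squeeze(0),
        }
```

You would then construct it with, e.g., `T5Tokenizer.from_pretrained("t5-small")` and pass it to the trainer in place of the cached dataset.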
Hi patil-suraj, thank you for the nice repo. Is it possible for you to show us how to fine-tune using a custom dataset? I've tried the approach of changing the link in `squad_multitask.py`, but it keeps failing, and I had no luck loading the data directly with `nlp.load_dataset` in `prepare_data.py`.
The following are my datasets:
- https://raw.githubusercontent.com/hariesramdhani/test_repo/main/dev-v1.1.json
- https://raw.githubusercontent.com/hariesramdhani/test_repo/main/train-v1.1.json
Thank you very much
@hariesramdhani I'm not sure if you are still stuck, but you need to include `data_files='/path/to/file'` in the `nlp.load_dataset()` call (source: https://huggingface.co/docs/datasets/v0.4.0/add_dataset.html). Example:

```python
natural_question = nlp.load_dataset("natural_questions", data_files='/content/drive/My Drive/natural_questions', split=nlp.Split.TRAIN)
```
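For the SQuAD-format JSON files linked above, a direct load might look like the sketch below. Two caveats: the `field` argument exists in later `datasets` versions but I'm not certain the `nlp` version used here supports it, and even with it each row would be a nested SQuAD article that you'd still have to flatten, so the `squad_multitask.py` route is likely the safer path.

```python
import nlp

# Untested sketch: raw SQuAD JSON nests everything under a top-level "data" key.
train_dataset = nlp.load_dataset(
    "json",
    data_files="https://raw.githubusercontent.com/hariesramdhani/test_repo/main/train-v1.1.json",
    field="data",  # assumption: may not be supported by this nlp version's json loader
    split=nlp.Split.TRAIN,
)
```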