BERT-QnA-Squad_2.0_Finetuned_Model
Transformers
Excellent work, thanks for sharing.
Any chance your code will be upgraded from Huggingface (HF) pytorch_pretrained_bert to the latest HF transformers? I'm looking at Test_batch.py, trying to figure out which areas of the code would be affected. As HF notes, transformers models always return tuples, which might be an issue. I'd like to try RoBERTa and ALBERT models once they are added to the HF transformers framework.
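For anyone else assessing the migration, here is a minimal sketch of the tuple-return change. The tiny randomly-initialized config and dummy input are placeholders so the snippet runs without downloading a checkpoint; in Test_batch.py the model would instead come from BertForQuestionAnswering.from_pretrained(...) with the finetuned weights:

```python
import torch
from transformers import BertConfig, BertForQuestionAnswering

# Placeholder: a tiny randomly-initialized model, used here only to
# demonstrate the return type (a real run would load finetuned weights).
config = BertConfig(
    vocab_size=100,
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
)
model = BertForQuestionAnswering(config)
model.eval()

input_ids = torch.tensor([[1, 2, 3, 4]])  # dummy token ids
with torch.no_grad():
    outputs = model(input_ids)

# pytorch_pretrained_bert returned the logits directly:
#     start_logits, end_logits = model(input_ids)
# transformers returns a tuple (a ModelOutput in recent versions,
# which still supports tuple-style indexing), so unpack explicitly:
start_logits, end_logits = outputs[:2]
print(start_logits.shape, end_logits.shape)
```

So the main change in Test_batch.py should be unpacking the model's return value rather than using it directly.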
I substituted my finetuned BERT_large whole-word-masking SQuAD 2.0 model into the shell script that runs Test_batch.py, and it seemed to improve results when I changed the contents of Input_file.txt. With one cell in Run_test.ipynb running BERT_base and another running BERT_large_wwm, it makes for a quick, easy comparison.