
Results 35 comments of André

90% is actually a very high performance... The system is expected to answer fewer than 80% of the questions correctly.

Do you have a PyTorch version of a `BertForQuestionAnswering` model (HuggingFace's module version)? If you have one, yes, there is a way to do it; you would just need to...

Your model should be an instance of the class `BertForQuestionAnswering`, a class provided in HuggingFace's [transformers](https://github.com/huggingface/transformers#quick-tour) library. It is a subclass of `torch.nn.Module`, i.e. it's a PyTorch model class.
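
As a rough sketch (assuming a recent version of `transformers`; the checkpoint name below is just one of the publicly available SQuAD-fine-tuned BERT models), loading and querying such a model directly looks like this:

```python
import torch
from transformers import BertForQuestionAnswering, BertTokenizer

# A SQuAD-fine-tuned checkpoint; any compatible checkpoint would work here.
model_name = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForQuestionAnswering.from_pretrained(model_name)

# The model is a plain PyTorch module, so all the usual torch APIs apply.
assert isinstance(model, torch.nn.Module)

question = "Where was the dataset created?"
context = "SQuAD was created by researchers at Stanford University."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The start/end logits mark the answer span inside the context.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```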

Thanks for the tips @alex-movila! However, I am afraid they are not compatible with `cdQA`, for two reasons: 1) these tips are for the TF version of BERT, while...

> but would inference on CPU really be feasible?

Currently, inference on CPU is feasible but slow (about 10 to 20 s per inference, depending on the question and the CPU)....

> Would DistilBERT be one way to achieve faster inference? Huggingface have already fine-tuned it on SQuAD v1.1: https://huggingface.co/transformers/model_doc/distilbert.html#distilbertforquestionanswering
>
> In their blog post they quote 60% speedup in...
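
For reference, a minimal sketch of loading that checkpoint with the `transformers` question-answering pipeline (the checkpoint name comes from the linked docs; the actual speedup inside cdQA would still need to be benchmarked):

```python
from transformers import pipeline

# SQuAD-fine-tuned DistilBERT checkpoint referenced in the docs linked above.
qa = pipeline(
    "question-answering",
    model="distilbert-base-uncased-distilled-squad",
    tokenizer="distilbert-base-uncased-distilled-squad",
)

result = qa(
    question="Why is DistilBERT faster?",
    context="DistilBERT is a smaller, distilled version of BERT that trades "
            "a little accuracy for significantly faster inference.",
)
print(result["answer"], result["score"])
```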

Hi,

Answering your questions:

> First: Can I directly use SQuAD for a Chinese (closed-domain) QA task?

I don't really understand it; SQuAD is a QA dataset in English, you would...

Yes, but it would be useful to require contributors to use Black: it reformats the code automatically when you run `black .`. It is very useful when we write...
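
As a small illustration of what Black does (the snippet is a made-up example, not actual cdQA code):

```python
# A made-up function, before formatting:
def rank( answers,k = 3 ):
    return sorted(answers,key = lambda a:a["score"],reverse = True)[ :k ]

# The same function after running `black .`:
def rank(answers, k=3):
    return sorted(answers, key=lambda a: a["score"], reverse=True)[:k]
```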

We could add a set of requirements for devs that includes pre-commit hooks, as OpenMined does with PySyft: https://github.com/OpenMined/PySyft/blob/dev/requirements_dev.txt

Hi, it was a problem with the available models. I have uploaded a fixed version.