Xingjian Shi
Which instance are you using? Would you try out the p3.8 instance?
Yes, try a p3.8 instance to verify the NMT scripts.
@szha Also, you may try turning on AMP support for ELECTRA pretraining if you have time.
@szha This is not about the vocabulary object itself; it is more about how we should revise the pretrained models, e.g., BERT and XLMR. In the implementation, we should still keep...
@hannw Would you provide the profiling scripts that produced these numbers? In the new version, we use a default dictionary for storing the mapping: https://github.com/dmlc/gluon-nlp/blob/32e87d4d4aa20a6eb658ee90d765ccffbd160571/src/gluonnlp/data/vocab.py#L114. Also, we are doing...
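As a minimal sketch of the idea behind a default-dictionary mapping (illustrative names only, not the actual gluon-nlp `Vocab` implementation): unknown tokens fall back to a fixed UNK index, so lookups never raise `KeyError`.

```python
from collections import defaultdict

# Hypothetical token-to-index mapping; missing keys fall back to UNK_IDX
# via the defaultdict factory instead of raising KeyError.
UNK_IDX = 0
token_to_idx = defaultdict(lambda: UNK_IDX,
                           {"<unk>": 0, "hello": 1, "world": 2})

print(token_to_idx["hello"])         # known token -> 1
print(token_to_idx["not-in-vocab"])  # unknown token -> falls back to 0
```

One caveat of this approach: a plain `defaultdict` inserts the missing key on lookup, so a real vocabulary would typically override `__missing__` or check membership explicitly to avoid growing the table.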
We may add the continuation-training feature to our existing translation, ELECTRA, and BERT examples.
@szha Would you help review?
@szha Would you take a look?
Let's try to rerun the training with the batch script here: https://github.com/dmlc/gluon-nlp/tree/master/tools/batch#squad-training Basically, we just need to run the following two commands for SQuAD 2.0 and 1.1:
```
# AWS Batch...
```
Yes, you can later use the following script to sync up the results.
```
bash question_answering/sync_batch_result.sh submit_squad_v2_horovod_fp16.log squad_v2_horovod_fp16
bash question_answering/sync_batch_result.sh submit_squad_v1_horovod_fp16.log squad_v1_horovod_fp16
```
After all results (part of the results)...