helo-word
Team Kakao&Brain's Grammatical Error Correction System for the ACL 2019 BEA Shared Task
I followed the instructions for the low-resource track. After the DAE step, I get a 0.33 score with evaluate.py on the best model. But when training with the 3k dev set...
When I replicated this model following the Track 1 instructions, I only got an F-score of about 28 on the valid set and less than 10 on the test set. Has anyone replicated...
In the paper you ensemble 5 models with different architectures; I wonder how to achieve this. Does fairseq support it? It would be nice if there were some scripts.
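For reference, fairseq does support decoding-time ensembling out of the box: pass several checkpoint paths to `--path`, separated by colons, and the models' output distributions are combined at each step. A minimal sketch (the data-bin directory and checkpoint paths below are hypothetical placeholders, not from this repo):

```shell
# Ensemble multiple trained checkpoints at generation time.
# The architectures may differ as long as they share the same dictionaries.
fairseq-generate data-bin/gec \
    --path ckpt_a/checkpoint_best.pt:ckpt_b/checkpoint_best.pt:ckpt_c/checkpoint_best.pt \
    --beam 5 --batch-size 32
```

This covers inference-time ensembling only; how the authors combined five heterogeneous models for the shared task may involve additional steps not shown here.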
Is it possible to upload the models, and perhaps add documentation on how to correct text using the trained models (whether supplied or not)?
Hi, thank you for releasing your code! I ran the preprocessing code `preprocess.py` and hit a runtime error:

```
INFO:root:skip this step as /workspace/helo_word/data/conll2014 is NOT empty
INFO:root:STEP 0-8. Download...
```
Hi, could you explain a way to incorporate a domain-specific corpus to train the model? My work involves identifying n-grams prevalent in medical texts, such as "sudden infant death syndrome", which...
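The n-gram mining this question refers to could be sketched with a small frequency counter; the function name, thresholds, and toy corpus below are illustrative assumptions, not part of the repo:

```python
from collections import Counter

def extract_ngrams(sentences, n=3, min_count=2):
    """Count word n-grams across tokenized sentences and keep frequent ones."""
    counts = Counter()
    for sent in sentences:
        tokens = sent.lower().split()
        for i in range(len(tokens) - n + 1):
            counts[" ".join(tokens[i:i + n])] += 1
    return {gram: c for gram, c in counts.items() if c >= min_count}

corpus = [
    "sudden infant death syndrome is rare",
    "risk factors for sudden infant death syndrome",
]
print(extract_ngrams(corpus, n=3))
# prints {'sudden infant death': 2, 'infant death syndrome': 2}
```

A list mined this way could seed a domain vocabulary, though how best to fold it into this model's training data is exactly what the question asks the authors.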
After reading your paper and code, I have a question about the spell-correction phase. Did you apply spell correction to the pre-training dataset, or only to the fine-tuning...