Emanuela Boros
One assumption I have is that the performance is computed and reported on the dev set rather than the test set. Any updates on the script? Thanks!
Hello, @xiaoya-li, my results are similar to @Lilin-whale's with your released data files for CoNLL2003. Any updates on the script?
Can you share the specific parameters? Thanks!
> I replicated the experiment result on English CoNLL03 with the latest code ([f80ed26](https://github.com/ShannonAI/mrc-for-flat-nested-ner/commit/f80ed26e4a5012f217f10ab61a3d53d537fb2094)). > And configurations can be found in https://github.com/ShannonAI/mrc-for-flat-nested-ner/blob/master/log/en_conll03.txt. > > Please contact me if you have...
> Hello there, > > I am using bert-base-uncased and haven't changed the config.ini (just commented out bert-large and using bert-base instead) but in your readme the performance for semeval...
Same question.
@maxloosmu What’s the entire output of run.py? (There’s a chance it tries to load the model on CPU.)
@maxloosmu it looks like it tries to load the model on CPU. Can you try just running this:
```python
import torch
import transformers
import requests

print(torch.cuda.is_available())
```
Did you try `--max_batch_size 6`?
@ZhuJD-China Great that you solved it. The second error is CUDA-related (`RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling cublasCreate(handle)`). @joeyz0z what is the error stack?