dopamine
@nghuyong Could the reason be that ernie-2.0-en involves task_type_vocab_size and its corresponding embedding?
> Sure, thanks a lot — feel free to submit an MR. Thanks for the reply. I still have some questions: I'm not sure how ERNIE 2.0's MLM model differs from BertForMaskedLM in Hugging Face. Do you happen to have any experience with this?
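A minimal sketch of the point raised above, assuming (not confirmed in this thread) that the main architectural difference is that ERNIE 2.0 adds an extra task-type embedding table (sized by task_type_vocab_size) on top of BERT's word/position/token-type embeddings, while the encoder and MLM head stay BERT-like. All names, sizes, and values here are toy illustrations, not the actual model:

```python
# Toy illustration: BERT-style vs. ERNIE-2.0-style embedding sums.
# ERNIE's extra table is indexed by a per-sequence task_type_id.

def embed(token_ids, task_type_id, word_emb, pos_emb, type_emb, task_emb=None):
    """Sum per-token embeddings; task_emb is the ERNIE-only extra table."""
    out = []
    for pos, tok in enumerate(token_ids):
        # BERT part: word + position + token-type (segment) embeddings
        vec = [w + p + t for w, p, t in zip(word_emb[tok], pos_emb[pos], type_emb[0])]
        if task_emb is not None:
            # ERNIE part: add the task-type embedding for this sequence
            vec = [v + e for v, e in zip(vec, task_emb[task_type_id])]
        out.append(vec)
    return out

# Toy tables: vocab=3, 2 positions, hidden size 2, task_type_vocab_size=2.
word_emb = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
pos_emb = [[0.0, 0.1], [0.1, 0.0]]
type_emb = [[0.0, 0.0]]
task_emb = [[1.0, 1.0], [2.0, 2.0]]

bert_like = embed([1, 2], 0, word_emb, pos_emb, type_emb)
ernie_like = embed([1, 2], 1, word_emb, pos_emb, type_emb, task_emb)
```

If this assumption holds, loading an ERNIE 2.0 checkpoint directly into BertForMaskedLM would silently drop (or fail on) the task-type embedding weights; recent versions of Hugging Face transformers appear to ship a dedicated ERNIE model class for this reason, which may be worth checking.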
Hi @luyug, thanks for your awesome work. I have a similar question about NQ. Would it be possible to give more details on reproducing the paper's NQ result (MRR@5 = 84.3),...
> Hello,
>
> Thanks for your interest! I am currently working on building JAX interfaces for tevatron. It will take a week or so before I can get back...
Hi @luyug, thank you very much. Sorry to disturb you again. I wonder whether the Tevatron toolkit can run experiments on the MS MARCO document ranking set? Or could you share how...
> The data for document ranking is structured in a similar way as the passage ranking dataset. I think the easiest thing is probably to follow the passage ranking example...
Hi @luyug, I find it hard to reproduce your results for NQ and TQA; mine are about 0.5-1.5 points lower than reported. I wonder if it's possible to...
Sorry for the late reply. I am now following the original DPR toolkit rather than Tevatron. Were your results entirely based on Tevatron? I have noticed that your shared [results](https://github.com/texttron/tevatron/tree/main/examples/dpr) are...
Thanks, I will check them. In addition, I think the code for mining hard negatives would help, since the details may differ. Also, could you share the first round...
Hi @mihail-amazon, could you help?