Results: 9 comments of Hanlard

> > trying to generate with 4 rtx 3090:
> >
> > ```
> > fairseq-generate \
> >     bin \
> >     --batch-size 1 \
> >     --path 12b_last_chk_4_gpus.pt \...
> > ```

> Can your data-preprocessing approach be used as the input for BertForQuestionAnswering?

Yes, it is the same. This is the BertForQuestionAnswering demo from the transformers package:

```
Examples::

    tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
    model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
    question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
    input_text = "[CLS] " +...
```
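For context, here is a minimal runnable sketch of how that docstring demo typically continues; this completion is my own illustration against a recent transformers API (the decoding step and the printed answer are assumptions, not the truncated original):

```python
import torch
from transformers import BertTokenizer, BertForQuestionAnswering

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained(
    "bert-large-uncased-whole-word-masking-finetuned-squad"
)

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
# The tokenizer builds the "[CLS] question [SEP] text [SEP]" pair and the
# token_type_ids that tell the model which segment each token belongs to.
inputs = tokenizer(question, text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# The predicted answer span is the argmax over the start/end logits.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```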

I'm running into the same problem!

> In the demo's question_answering.py file there is a method:
>
> ```python
> elif(xingzhengjibie == "镇"):
>     upper_address = get_xian_address(address)
>     if(len(ret_dict) == 0 and upper_address != 0):
>         ret_dict = get_xian_plant(upper_address, ret_dict)
>         if(len(ret_dict) > 0):
>             ret_dict['list'].append({'entity1': address, 'rel': '属于', 'entity2': upper_address, 'entity1_type': '地点', 'entity2_type': '地点'})
> ```
>
> Here get_xian_address(address) returns an empty value, so the QA interface cannot retrieve an answer. Can this method be improved?

Just curious, how did you get into debugging this? How do you debug that manage.py file?

> I also got an F1 of around 66 for trigger classification with BERT + a linear classification layer. But this is way below the results reported in the papers (https://www.aclweb.org/anthology/P19-1522.pdf, https://www.aclweb.org/anthology/K19-1061.pdf). They got F1...

Multiple GPUs are supported, but GPU-memory usage becomes unbalanced, with the main GPU using far more memory than the others.
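That imbalance is the classic symptom of torch.nn.DataParallel, which gathers all outputs and losses on the primary device. A minimal sketch of switching to DistributedDataParallel, which keeps memory roughly even across GPUs (the model and launch command here are placeholders, not the project's actual code):

```python
# Launch with: torchrun --nproc_per_node=4 train_ddp.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    # Stand-in model; replace with the real one.
    model = torch.nn.Linear(768, 2).cuda(local_rank)
    # Each process owns one GPU, so no single "main" GPU accumulates
    # everyone else's activations the way DataParallel's device 0 does.
    model = DDP(model, device_ids=[local_rank])

    x = torch.randn(8, 768).cuda(local_rank)
    loss = model(x).sum()
    loss.backward()  # gradients are all-reduced across processes here

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```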

Also, could you release a Chinese version?

In the statement `if args.model_path: model = ElectraForPreTraining.from_pretrained(args.model_path)`, ELECTRA pre-training involves both a generator and a discriminator; which of the two does `.from_pretrained` load here?
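For what it's worth: in the HuggingFace transformers library, ElectraForPreTraining is the discriminator (the replaced-token-detection head), while the generator is exposed separately as a masked-LM model. A small sketch; the google/electra-small-* checkpoint names are only illustrative:

```python
from transformers import ElectraForMaskedLM, ElectraForPreTraining

# ElectraForPreTraining = discriminator: predicts, per token, whether it
# was replaced by the generator (replaced-token detection).
discriminator = ElectraForPreTraining.from_pretrained(
    "google/electra-small-discriminator"
)

# The generator is a small masked LM; in transformers it is a separate model.
generator = ElectraForMaskedLM.from_pretrained("google/electra-small-generator")
```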

> AttributeError: 'NoneType' object has no attribute 'split'. I get this error when actually running the chatbot and don't know what causes it.

answer_search.py also has a spot where the username needs to be set; after I changed it, it worked.
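A hypothetical illustration of the kind of change meant, assuming the chatbot connects to Neo4j through py2neo (the host, user, and password values are placeholders; the actual answer_search.py may look different):

```python
from py2neo import Graph

class AnswerSearcher:
    def __init__(self):
        # A username/password mismatch here was reportedly the root cause of
        # the "'NoneType' object has no attribute 'split'" error above.
        self.g = Graph(
            "bolt://127.0.0.1:7687",
            auth=("neo4j", "your_password"),  # <-- set your own credentials
        )
```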