fooSynaptic
I mean step 4: "Download the processed data and pretrained model".
@dondon2475848 thanks, I have already processed the CNN/Daily Mail corpus, but I get an exception when running becxer/pointer-generator because the TensorFlow API has since been updated. Can you give me some...
Wow, a project that can't be run out of the box is really a headache.
Nice work, man.
same issue
I think I found the problem: during `prepare`, only `passage_token_ids` is kept for each passage, while `passage_tokens` is discarded. For the issue above, you can therefore work around the sentence-length computation by using `passage_token_ids` instead. However, when generating the final answer text, having only the token index information is not enough. This should be fixable by checking `prepare`.

```
{'answer_spans': [[12, 25]],
 'answers': ['1.《将夜》2.《择天记》3.《冒牌大英雄》4.《无限恐怖》5.《恐怖搞校》6.《大国医》7.《龙魔导》。',
             '《大唐悬疑录:长恨歌密码》、《风雪追击》、《草原动物园》、《有匪2:离恨楼》。',
             '我们住在一起、月都花落,沧海花开、天定风华、寻找爱情的邹小姐、应许之日、星光的彼端、他来了,请闭眼。'],
 'passages': [{'passage_token_ids': [304, 9027, 5274, 10069, 11144, 2439, 7282, 5935, 9027, 8917, 10208, 10208, 16431, 14496, 1, 6204, 5918, 1694, 16706, ...
```
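To illustrate the workaround described above, here is a minimal sketch: passage length can be computed from `passage_token_ids` alone, and the tokens can be recovered afterwards given an id-to-token vocabulary. The function names and the `id2token` mapping here are illustrative assumptions, not the project's actual API.

```python
# Hypothetical sketch: working with passage_token_ids when passage_tokens
# was not preserved by prepare. All names below are illustrative.

def passage_len(token_ids, pad_id=0):
    """Sentence length from ids alone: count non-padding tokens."""
    return sum(1 for i in token_ids if i != pad_id)

def decode_tokens(token_ids, id2token, pad_id=0):
    """Map token ids back to surface tokens, dropping padding.

    This is the step that needs the vocabulary: ids by themselves
    are not enough to produce the final answer text.
    """
    return [id2token[i] for i in token_ids if i != pad_id]

# toy vocabulary for demonstration
id2token = {0: '<pad>', 1: '<unk>', 2: 'hello', 3: 'world'}
print(passage_len([2, 3, 1, 0, 0]))                 # 3
print(decode_tokens([2, 3, 1, 0, 0], id2token))     # ['hello', 'world', '<unk>']
```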
The full debug process can be done by modifying the code:

```
def find_best_answer(self, sample, start_prob, end_prob, padded_p_len):
    """
    Finds the best answer for a sample given start_prob and end_prob for each position.
    This will call find_best_answer_for_passage because there are...
```
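For context, the span-selection logic that a `find_best_answer_for_passage` helper typically implements is: choose the pair (start, end) that maximizes `start_prob[start] * end_prob[end]` subject to `start <= end <= start + max_len`. A minimal self-contained sketch (names and the `max_len` constraint are assumptions, not taken from this repo's code):

```python
# Sketch of best-span search over start/end probability vectors.
# Returns (start_index, end_index, score) for the highest-scoring span.
def find_best_span(start_probs, end_probs, max_len=10):
    best = (0, 0, 0.0)
    for s, sp in enumerate(start_probs):
        # only consider ends at or after the start, within max_len tokens
        for e in range(s, min(s + max_len, len(end_probs))):
            score = sp * end_probs[e]
            if score > best[2]:
                best = (s, e, score)
    return best

s, e, score = find_best_span([0.1, 0.7, 0.2], [0.2, 0.1, 0.7])
print(s, e)  # 1 2
```

With the span indices in hand, the answer text is sliced out of the passage tokens, which is exactly where the missing `passage_tokens` field becomes a problem.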
One more question: I couldn't find the implementation of the `self.attention` part that the repo owner mentioned. Did the repo owner treat attention flow as self-attention?
The L-BFGS implementation is optimized at a low level.