Cao_enjun
https://github.com/abisee/pointer-generator/blob/b29e986f24fdd01a6b6d6008187c5c887f0be282/attention_decoder.py#L101
Hi, abisee! I am working on a new dataset that is about 6 times bigger than your CNN dataset. Do I need to add more parameters to the model...
https://github.com/abisee/pointer-generator/blob/a7317f573d01b944c31a76bde7218bcfc890ef6a/attention_decoder.py#L173 Hi, abisee! I am confused about the code on this line. In your paper, the function is V[s_t, h*_t] + b, but in the code you use rnn_output instead....
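For reference, a minimal sketch of what the paper's formula describes (illustrative PyTorch, not the repository's TensorFlow code; s_t is the decoder state, h*_t the attention context vector, and the Linear layer plays the role of V and b):

import torch

hidden_dim, vocab_size = 256, 50000
s_t = torch.randn(1, hidden_dim)        # decoder state at step t
h_star_t = torch.randn(1, hidden_dim)   # attention context vector at step t

# V [s_t, h*_t] + b: a linear layer over the concatenation of state and context.
linear = torch.nn.Linear(2 * hidden_dim, vocab_size)
vocab_logits = linear(torch.cat([s_t, h_star_t], dim=1))
vocab_dist = torch.softmax(vocab_logits, dim=1)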
Hi, abisee! Thanks for sharing this good work! I have some questions about the Experiments section of your paper. You said that with the coverage loss weight set to 1.0 the...
Hi, zhiguowang! I found that you use dropout during training but not at test time. Dropout is applied when the model is trained, but if it is not applied at test time,...
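For reference, a minimal sketch of the standard train/test dropout convention (illustrative PyTorch, not BiMPM's TensorFlow code): dropout randomly zeroes units only in training mode and is a no-op at test time, with inverted scaling during training so no rescaling is needed at inference.

import torch
import torch.nn as nn

layer = nn.Sequential(nn.Linear(100, 100), nn.Dropout(p=0.2))
x = torch.randn(4, 100)

layer.train()                 # training mode: dropout zeroes units and scales by 1/(1-p)
train_out = layer(x)

layer.eval()                  # test mode: dropout is the identity, no units are dropped
with torch.no_grad():
    test_out = layer(x)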
https://github.com/zhiguowang/BiMPM/blob/33cc8fe5d450f432a6843bc05cad29c6ce9f5714/src/SentenceMatchModelGraph.py#L158 Hi, zhiguowang! Why not use the probabilities after the softmax?
https://github.com/lanwuwei/SPM_toolkit/blob/3e2cb35681e9f31bfcc66afde2159bbf394056df/DecAtt/main_quora.py#L56 I get the error: cat(): argument 'tensors' (position 1) must be tuple of Tensors, not generator. Can I fix this bug with the following line? torch.cat(tuple([dict[word].view(1, -1) for word in lsent]))
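A minimal sketch of the idea behind that fix (the word_vecs dict and the 300-dimensional vectors below are stand-ins, not the repository's actual variables): torch.cat() rejects a bare generator, so materializing it into a list or tuple first avoids the error.

import torch

# Stand-in embedding lookup: word -> 1-D tensor of size 300 (illustrative only).
word_vecs = {w: torch.randn(300) for w in ["how", "are", "you"]}
lsent = ["how", "are", "you"]

# torch.cat() requires a list or tuple of tensors, not a generator expression.
sent_matrix = torch.cat([word_vecs[w].view(1, -1) for w in lsent], dim=0)
print(sent_matrix.shape)  # torch.Size([3, 300])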
Hello, I have some questions about dropout in VAT. If I use dropout in VAT, the output distribution changes even without any perturbation. Thanks!
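To illustrate the concern, a minimal sketch (illustrative PyTorch, not the VAT code): with dropout active, two forward passes on the same input already disagree, so the KL term between the "clean" and "perturbed" distributions is nonzero even with zero perturbation; running both passes in eval mode (or sharing the dropout mask) removes that source of noise.

import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Dropout(0.5), nn.Linear(32, 3))
x = torch.randn(8, 10)

model.train()   # dropout active: two passes on the same x give different outputs
p_log = F.log_softmax(model(x), dim=1)
q = F.softmax(model(x), dim=1)
kl_train = F.kl_div(p_log, q, reduction="batchmean")   # nonzero even with no perturbation

model.eval()    # dropout off: the two passes agree and the KL term is ~0
p_log = F.log_softmax(model(x), dim=1)
q = F.softmax(model(x), dim=1)
kl_eval = F.kl_div(p_log, q, reduction="batchmean")
print(kl_train.item(), kl_eval.item())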
https://github.com/thinline72/toxic/blob/c4f50a0560d312da8ab60e2492a2e72c54cc3f55/sergeif/bigru_focal_loss.py#L52 Thanks for sharing this good work! I have a problem with this line: how can I generate this file? @ifserge I would appreciate your help!
Hello, teacher. I would like to ask: in a sequence labeling task, the label sequence Y is something we can obtain, so why can't we estimate the parameters directly with maximum likelihood or Bayesian estimation, instead of treating Y as a latent variable and using the EM algorithm? Is it reasonable to treat Y directly as observed classification labels?
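For reference, a rough sketch of the two objectives being contrasted (notation is mine, not the original post's): when Y is observed, the supervised likelihood can be maximized directly; EM is only needed when Y is unobserved and has to be summed out.

% Y observed: direct (conditional) maximum likelihood
\hat{\theta}_{\mathrm{MLE}} = \arg\max_{\theta} \sum_{i} \log p\big(Y^{(i)} \mid X^{(i)}; \theta\big)

% Y latent: marginal likelihood, typically optimized with EM
\hat{\theta} = \arg\max_{\theta} \sum_{i} \log \sum_{Y} p\big(Y, X^{(i)}; \theta\big)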