Results: 152 comments of HT Liu

@FrankWork Yeah, at the beginning I just used the F1 metric in **sklearn**, and the result was quite a bit lower than the paper's. Then I used scorer.pl to evaluate; it is...
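A likely source of the gap: the official SemEval-2010 Task 8 scorer (scorer.pl) reports macro-averaged F1 over the relation classes while excluding the "Other" class, whereas sklearn's default settings average over all classes. A minimal sketch of the difference, with hypothetical labels (class 0 standing in for "Other"):

```python
from sklearn.metrics import f1_score

# hypothetical predictions; class 0 = "Other", classes 1..3 = relation types
y_true = [0, 1, 2, 2, 3, 0]
y_pred = [0, 1, 2, 3, 3, 1]

# default micro-F1 over all classes, including "Other"
default_f1 = f1_score(y_true, y_pred, average='micro')

# closer to the official scorer: macro-average over relation classes only,
# with "Other" excluded via the `labels` argument
official_style = f1_score(y_true, y_pred, labels=[1, 2, 3], average='macro')
```

The two numbers can differ substantially, which would explain a score below the paper's before switching to scorer.pl.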

@JankinXu Hi, I have re-implemented the model with **pytorch** as well. However, I cannot reproduce the 82.4 F1-score; mine is a little lower than yours, about 79%, which confuses me...

@FrankWork My model code is here: https://paste.ubuntu.com/26385600/ I use Adam to optimize the cross-entropy loss.

@FrankWork Yeah, I have tried adding L2 regularization only to the out_linear layer, but it does not improve performance... it's really confusing.

@FrankWork Thanks. I use the following code in pytorch to add L2 regularization (note the list is named `params`, not `param`):

```python
params = []
for k, v in dict(model.named_parameters()).items():
    if k.startswith('out_linear'):
        params += [{'params': [v], 'lr': 0.001, 'weight_decay': 0.01}]
    else:
        ...
```

@JankinXu Hi. Two dropout layers do indeed accelerate convergence and improve performance. At the beginning there was only one filter in my code, so the performance was lower. After using...

Hi, thanks for your reply. I have tried `randomized document order`, but all the documents are still mixed together. I have to create different projects for different documents and...

Hi, my request is that different users can annotate different docs in **one project**:

- Project1:
  - user1: doc1, doc2, doc3
  - user2: doc4, doc5 .....
  - user3 .....

However,...

@Mrlyk423 Thanks for your reply. It is quite clear. That is to say, formula (10) ![image](https://user-images.githubusercontent.com/10215945/34427614-46649060-ec7f-11e7-84ef-8cbbf67b232f.png) is just a stack of each relation's score **o_r = M_r s ...
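The "stacking" reading can be checked numerically: computing all relation scores at once as the matrix-vector product **o = M s** gives the same result as computing each row's score **o_r = M_r s** separately. A small sketch with illustrative sizes (the real relation count and representation dimension depend on the dataset and model):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((53, 230))   # one row M_r per relation (sizes illustrative)
s = rng.standard_normal(230)         # sentence/bag representation

o = M @ s                            # all relation scores in one product
o_rowwise = np.array([M[r] @ s for r in range(M.shape[0])])
assert np.allclose(o, o_rowwise)     # stacked and per-row scores agree
```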

@Mrlyk423 Thanks, your reply helps me a lot in understanding the paper. Best.