慢半拍

Results: 20 comments of 慢半拍

Thank you very much. ------------------ Original Message ------------------ From: "knavezl"

> Hi! Thanks for your open-source contribution to this fantastic work. Has this work been accepted by any conference or journal? I want to cite this work...

> > > Hi! Thanks for your open-source contribution to this fantastic work. Has this work been accepted by any conference or journal?...

> Hi, in order to annotate data according to your format, can we do it in Python and save it as a CSV file?

Yes, you can.
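As an illustration of the "annotate in Python, save as CSV" route, here is a minimal sketch using only the standard library. The column names below are placeholders, not the repo's actual annotation format; match them to the format described in the README.

```python
import csv

# Hypothetical annotation records; replace the fields with the
# repo's actual annotation format.
rows = [
    {"sentence": "The food was great but the service was slow.",
     "aspect_term": "food", "opinion_term": "great", "sentiment": "positive"},
]

with open("annotations.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f, fieldnames=["sentence", "aspect_term", "opinion_term", "sentiment"])
    writer.writeheader()
    writer.writerows(rows)
```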

> Do users need to annotate the dataset for training, testing, and evaluation, or can they re-use already annotated datasets? I would like to test this repo...

Thanks for your interest in this work. There are two main reasons why we use BiLSTM+BERT rather than BERT alone: 1. We formulate both Aspect Term Extraction (ATE) and Target-oriented...
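For context on the BiLSTM+BERT choice, here is a minimal PyTorch sketch of a BiLSTM stacked on BERT features for token-level tagging. It only illustrates the general architecture; it uses the Hugging Face `transformers` API rather than the repo's AllenNLP code, and the class name, tag count, and hidden size are assumptions.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class BertBiLSTMTagger(nn.Module):
    """Illustrative BiLSTM-over-BERT tagger, not the repo's actual model."""

    def __init__(self, num_tags: int, hidden: int = 300):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_tags)

    def forward(self, input_ids, attention_mask):
        # Contextual token features from BERT ...
        feats = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        # ... re-encoded by a BiLSTM before per-token classification (e.g. BIO tags).
        out, _ = self.lstm(feats)
        return self.classifier(out)  # (batch, seq_len, num_tags)

tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
batch = tok(["The sushi was fresh ."], return_tensors="pt")
logits = BertBiLSTMTagger(num_tags=3)(batch["input_ids"], batch["attention_mask"])
```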

> ```
> (ASOTE) me@me:~/ASOTE/ASOTE$ sh repeat_non_bert.sh 0 101-ASOTEDataRest14-0,101-ASOTEDataRest14-1,101-ASOTEDataRest14-2,101-ASOTEDataRest14-3,101-ASOTEDataRest14-4 nlp_tasks/absa/mining_opinions/sequence_labeling/towe_bootstrap.py --embedding_filepath /home/me/ASOTE/ASOTE/glove.840B.300d.txt --bert_file_path /home/me/ASOTE/ASOTE/bert-base-uncased.tar.gz --bert_vocab_file_path /home/me/ASOTE/ASOTE/bert-base-uncased-vocab.txt --current_dataset ASOTEDataRest14 --data_type common_bert_with_second_sentence_101 --model_name TermBertWithSecondSentence --train False --evaluate False --predict False --crf False...
> ```

> Hello, I don't know what to do with the instructions in the README file, and the module allennlp was not found. Can you give me some suggestions?

1. You...
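The reply above is truncated. As a generic first check for a "module not found" error (an assumption, not necessarily the author's suggestion), one can confirm that allennlp is importable from the same interpreter used to run the repo's scripts:

```python
import importlib.util
import sys

# Confirm allennlp is installed in the interpreter you run the scripts with;
# if not, install it into that environment (e.g. with pip, at the version
# pinned by the repo's requirements).
spec = importlib.util.find_spec("allennlp")
if spec is None:
    sys.exit("allennlp is not installed for this interpreter: " + sys.executable)
print("allennlp found at", spec.origin)
```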

Thanks for your interest in our work. I hope the following answers your questions:

1. In fact, the problem this paper focuses on is: given a sentence and the aspect categories that appear in it, predict the sentiment of those aspect categories. That is, we assume the aspect categories are known (or are predicted by another module) and do not try to predict whether a particular aspect category is mentioned in the sentence. This is the most common setting for aspect category sentiment analysis. (Of course, there is also later work that identifies aspect categories and their sentiments jointly.)
2. Sentiment classification uses a cross-entropy loss. During training, if an aspect category does not appear in the sentence, its corresponding part contributes no loss and takes no part in gradient descent. Since the aspect categories are assumed to be known (or predicted by another module), sentiment classification does not need to additionally predict whether the sentence contains a given aspect category.
3. In the full model, the ACD part does in effect judge whether the sentence contains a given aspect category; however, it was intended as an auxiliary task for discovering words related to each aspect category. To make sure the attention module can do its job, the contextual sentence representation is kept simple, so using it directly to predict aspect categories may not give the best results.
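A minimal PyTorch sketch of the loss masking described in point 2: per-category sentiment logits, with categories absent from the sentence excluded from the cross-entropy loss and hence from the gradient. The shapes and the `present` mask are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

batch, num_categories, num_polarities = 2, 5, 3
logits = torch.randn(batch, num_categories, num_polarities, requires_grad=True)
labels = torch.randint(0, num_polarities, (batch, num_categories))
# Which aspect categories are given for each sentence (assumed known).
present = torch.tensor([[1, 0, 1, 0, 0],
                        [0, 1, 0, 0, 1]], dtype=torch.bool)

# Per-category loss, then zero out categories not present in the sentence.
per_cat_loss = F.cross_entropy(logits.view(-1, num_polarities),
                               labels.view(-1), reduction="none")
loss = (per_cat_loss * present.view(-1).float()).sum() / present.sum()
loss.backward()  # absent categories contribute zero gradient
```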

> 1. "为了保证attention模块能发挥作用,上下文句子表示部分比较简单",可以参考这篇论文 Attention is not not Explanation https://aclanthology.org/D19-1002.pdf。 2. 当时ACD部分是试过双向LSTM的,具体结果记不清了,需要找下。记得比较清楚的是,可视化效果会变差,就是attention权重大的词,不一定是我们认为重要的词。