
Code for the paper "A Multi-task Learning Model for Chinese-oriented Aspect Polarity Classification and Aspect Term Extraction".

13 LCF-ATEPC issues, sorted by recently updated

Hello, I am confused about which words are considered part of the local context of the aspect. Is it based only on the index/position, or do you use MHSA to choose...
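For readers with the same question: in LCF-style models the local context is usually selected by semantic-relative distance (token position relative to the aspect span), with MHSA applied afterwards to the masked or weighted features rather than used to pick the words. Below is a minimal sketch of a distance-based (CDM-style) context mask; the function name, threshold, and dimensions are illustrative, not the repository's exact implementation.

```python
import torch

def local_context_mask(seq_len, aspect_start, aspect_end, srd_threshold=3, hidden_dim=768):
    """CDM-style mask: keep tokens whose distance to the aspect span is within
    the SRD threshold, zero out the rest (illustrative, not the repo's code)."""
    mask = torch.zeros(seq_len, hidden_dim)
    for i in range(seq_len):
        if i < aspect_start:
            distance = aspect_start - i
        elif i > aspect_end:
            distance = i - aspect_end
        else:
            distance = 0          # token is inside the aspect span
        if distance <= srd_threshold:
            mask[i] = 1.0
    return mask

# usage: local_features = bert_hidden_states * local_context_mask(128, 5, 6).unsqueeze(0)
```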

I have a dataset annotated only with positive and negative sentiment, and when the model predicts sentiment the results are often completely reversed: ![image](https://user-images.githubusercontent.com/76596358/163294029-601c65f3-da50-4ec4-a8c0-a63dcdfb3be4.png)
For positive sentences the argmax result is always 0 (negative), while for negative sentences it is always 1, even though my dataset has far more positive samples than negative ones. A few of the changes I made:
Data annotation: ![image](https://user-images.githubusercontent.com/76596358/163294215-47a7fea2-3dd5-4d73-bd55-a056e73477b4.png) Following the Chinese datasets, I label the samples 0/2, split 80/20 into training and test sets, with roughly 330k training lines.
Model saving and loading: ![image](https://user-images.githubusercontent.com/76596358/163294497-4762743b-5c5e-4765-b65d-216fb1789fa4.png) ![image](https://user-images.githubusercontent.com/76596358/163294568-a7f89167-9b0c-46d3-bdde-792d7d7a719d.png)
Prediction output: ![image](https://user-images.githubusercontent.com/76596358/163295479-bf9798ef-f63a-45bb-89fe-1828ac08e0c3.png)
With the public datasets the predictions look normal, but with my own dataset the sentiment predictions are far off. This has puzzled me for days and I cannot figure out where the problem is; I would appreciate your advice.
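One thing worth checking in situations like this (a hedged guess, not a confirmed diagnosis): when the polarity column contains only {0, 2}, the classifier head typically works on re-indexed classes {0, 1}, so the argmax index no longer equals the original polarity value, and a mismatch between the training-time label map and the one assumed at inference looks exactly like "reversed" predictions. A small illustrative check, with made-up names and a stand-in label list:

```python
# Rebuild the label map the way a loader based on sorted unique values would,
# then translate argmax indices back through it before interpreting them.
def build_label_map(polarity_values):
    sorted_labels = sorted(set(polarity_values))             # e.g. [0, 2]
    return {label: idx for idx, label in enumerate(sorted_labels)}

train_polarities = [0, 2, 2, 0, 2]                           # stand-in for the real data
label_map = build_label_map(train_polarities)                # {0: 0, 2: 1}
index_to_label = {idx: label for label, idx in label_map.items()}

pred_index = 1                                               # argmax output of the model
print("predicted polarity:", index_to_label[pred_index])     # 2 (positive), not "1"
```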

Hello, friend. Why does the dataset polarity have the four classes {0, -1, 1, 2}? I would like to know what each of them means. Also, why do non-aspect words carry polarity labels as well? Thanks for your answer.

File "D:\my_python\LCF-ATEPC\utils\Pytorch_GPUManager.py", line 33, in to_numberic = lambda v: float(v.upper().strip("\t").replace('MIB', '').replace('W', '')) # 带单位字符串去掉单位 ValueError: could not convert string to float: ' [N/A]'

Hello, is there any code for prediction/inference?

Hello, the loss function in this work for multi-task learning is loss = loss_ate + loss_apc. If I want to find just the aspect polarity using SPC, should I change the...
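For what it's worth, the usual way to train only the polarity head with a joint objective like this is to zero-weight (or drop) the ATE term rather than rewrite the loop; a hedged sketch with illustrative weights and stand-in loss values, not the repository's exact training code:

```python
import torch

ATE_WEIGHT = 0.0   # set to 0.0 to train the aspect-polarity classifier alone
APC_WEIGHT = 1.0

loss_ate = torch.tensor(0.7, requires_grad=True)   # stand-ins for the two task losses
loss_apc = torch.tensor(0.4, requires_grad=True)

loss = ATE_WEIGHT * loss_ate + APC_WEIGHT * loss_apc
loss.backward()                                    # gradients flow only through loss_apc
print(float(loss))                                 # ~0.4, only the APC term contributes
```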

Hello, in the restaurant dataset I find 4 polarity labels attached to the B-ASP and I-ASP tags. I understand that 0 is negative, -1 is neutral, and 2 is positive, but what about 1, such as...

Hello, please, I need to print recall and precision in the output; getting the confusion matrix would also help me. Could you help, please?
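Precision, recall, and a confusion matrix for the polarity predictions can be computed outside the training script with scikit-learn once the gold labels and predictions from the evaluation loop are collected; a short sketch with placeholder label lists:

```python
from sklearn.metrics import precision_score, recall_score, confusion_matrix, classification_report

# Placeholder label ids gathered during evaluation; replace with the real lists.
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]

print("precision (macro):", precision_score(y_true, y_pred, average='macro'))
print("recall (macro):   ", recall_score(y_true, y_pred, average='macro'))
print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, digits=4))
```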

When predicting, the model extracts aspect terms very well, but when a sentence contains multiple aspects with opposite sentiment polarities, the predicted polarities come out wrong. I have run the experiment many times. For example, when I input a test sentence that clearly carries both positive and negative sentiment, such as 'the staff was so nice to us, But the service was bad', it always returns the same sentiment for both: 'aspect': ['staff', 'service']; 'sentiment': ['Negative', 'Negative']. Besides the model I trained myself, I also tried your pretrained models, and they likewise struggle with sentences containing multiple aspects of opposite polarity. How can this problem be solved? ![AA](https://user-images.githubusercontent.com/45477447/145996456-8cdbf6d2-d770-49c8-9cfa-6b2347a88dcb.png)

Hi, thank you for sharing your code with us. As I understand it, the results of APC are affected by those of AE, aren't they? You use the extracted...