knowledge-graph-learning
NAACL-2019-Old is Gold: Linguistic Driven Approach for Entity and Relation Linking of Short Text
One-sentence summary:
A fairly engineering-oriented paper on the entity linking problem. It proposes a framework called Falcon that maps the entities and relations mentioned in a short text to the corresponding entries in a background knowledge graph.
- Problem: short texts are challenging for NER, RE, EL, etc., because they do not provide enough context, or the context they do provide is only partial.
- Proposal: the proposed Falcon effectively links the entities and relations in a short text.
- Approach: joint entity and relation linking of a short text by leveraging several fundamental principles of English morphology, using an extended knowledge graph created by merging entities and relations from various knowledge sources (see the sketch after this list).
- Results:
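To make the "extended knowledge graph" idea concrete, here is a minimal sketch, assuming toy hand-written label/URI pairs rather than the actual DBpedia dumps and external sources the paper merges; it only illustrates collapsing several label sources into one alias index used for candidate lookup:

```python
from collections import defaultdict

def build_alias_index(sources):
    """Merge (label, URI) pairs from several sources into one
    alias -> candidate-URI index used for candidate lookup."""
    index = defaultdict(set)
    for label, uri in sources:
        index[label.lower()].add(uri)
    return index

# Hypothetical stand-ins for DBpedia labels plus extra knowledge sources
# (e.g. synonym or verbalization lists); not the paper's real data.
entity_sources = [
    ("The Pillars of the Earth", "dbr:The_Pillars_of_the_Earth"),
    ("Pillars of The Earth", "dbr:The_Pillars_of_the_Earth"),
]
relation_sources = [
    ("author", "dbo:author"),
    ("wrote", "dbo:author"),  # verbalization contributed by an external lexicon
]

entity_index = build_alias_index(entity_sources)
relation_index = build_alias_index(relation_sources)
print(entity_index["pillars of the earth"])  # {'dbr:The_Pillars_of_the_Earth'}
```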
Resources:
Paper info:
- Affiliation: L3S Research Center, Hannover, Germany
- Dataset: LC-QuAD, QALD-7
- Keywords: entity linking, relation linking, short text
Notes:
Entity Linking (EL) has two sub-tasks: Named Entity Recognition (NER) and Named Entity Disambiguation (NED). It mostly relies on off-the-shelf KBs such as Wikipedia or DBpedia, and in QA settings EL is what lets the answer to a question be retrieved from the KB. In the example below, an NED tool has to identify the concrete entity mentioned in the question and link it to the corresponding DBpedia entity (e.g. ‘Pillars of The Earth’ to dbr:The_Pillars_of_the_Earth).
‘Who wrote the book The Pillars of The Earth?’
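As a toy illustration of the NED step, the sketch below does a longest-match n-gram lookup against a hand-written alias dictionary; the names and data are assumptions for illustration, since Falcon's real candidate lists come from its extended knowledge graph:

```python
# Toy NED illustration: longest-match lookup of question n-grams against an
# alias dictionary (hypothetical data, not the paper's candidate generation).
ENTITY_ALIASES = {
    "the pillars of the earth": "dbr:The_Pillars_of_the_Earth",
    "pillars of the earth": "dbr:The_Pillars_of_the_Earth",
}

def link_entities(question: str):
    tokens = question.lower().replace("?", "").split()
    matches = []
    # Prefer longer n-grams so "pillars of the earth" beats "earth".
    for n in range(len(tokens), 0, -1):
        for i in range(len(tokens) - n + 1):
            span = " ".join(tokens[i:i + n])
            if span in ENTITY_ALIASES:
                matches.append((span, ENTITY_ALIASES[span]))
        if matches:
            break
    return matches

print(link_entities("Who wrote the book The Pillars of The Earth?"))
# [('the pillars of the earth', 'dbr:The_Pillars_of_the_Earth')]
```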
Another important NLP task is relation linking (RL): aligning the relation expressed in the text with the equivalent KG relation. In the example above, a relation linking (RL) tool should link ‘wrote’ to dbo:author.
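A similarly minimal sketch of relation linking, assuming a tiny hand-written lexicon from verb surface forms to DBpedia properties (the paper derives such mappings from external knowledge sources rather than a hard-coded dictionary):

```python
# Toy RL illustration: map verb surface forms to DBpedia properties
# (hypothetical lexicon entries for illustration only).
RELATION_LEXICON = {
    "wrote": "dbo:author",
    "written": "dbo:author",
    "author of": "dbo:author",
}

def link_relation(question: str):
    q = question.lower()
    return [(surface, prop) for surface, prop in RELATION_LEXICON.items()
            if surface in q]

print(link_relation("Who wrote the book The Pillars of The Earth?"))
# [('wrote', 'dbo:author')]
```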
However, these tasks often fail on short texts. Short texts lack sufficient context, which is fatal for the disambiguation process. An even more important reason is that short texts tend to have surface-form problems, e.g. the text is incomplete, inexpressive, or implicit, which is especially damaging for relation linking.
The paper therefore proposes jointly linking the entities and relations within a short text to the entities and relations of the DBpedia KG, in a way that is both robust and efficient.
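One simple way to realize the "joint" part is to validate entity/relation candidate pairs against the KG itself and keep only pairs supported by an actual triple. The sketch below shows that idea with a toy triple set; it is not Falcon's exact ranking procedure:

```python
from itertools import product

# A tiny stand-in for DBpedia triples (subject, predicate, object).
KG_TRIPLES = {
    ("dbr:The_Pillars_of_the_Earth", "dbo:author", "dbr:Ken_Follett"),
}

def joint_link(entity_cands, relation_cands):
    """Keep only (entity, relation) candidate pairs that the KG
    actually supports with at least one triple."""
    supported = []
    for ent, rel in product(entity_cands, relation_cands):
        if any(s == ent and p == rel for s, p, _ in KG_TRIPLES):
            supported.append((ent, rel))
    return supported

print(joint_link(["dbr:The_Pillars_of_the_Earth", "dbr:Earth"], ["dbo:author"]))
# [('dbr:The_Pillars_of_the_Earth', 'dbo:author')]
```

Checking entity and relation candidates together is what lets the correct entity win even when its surface form alone is ambiguous.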
Model figure:
(mainly the EL and RL modules)
Results:
- LC-QuAD (Trivedi et al., 2017) dataset comprises 5,000 complex questions for DBpedia
- QALD-7 (Usbeck et al., 2017) is the most popular benchmarking dataset for QA over DBpedia comprising 215 questions
Papers to read next: