AttributeError when running IEBasedGraphConstruction.topology
❓ Questions and Help
I am trying to run the following example from the documentation (https://graph4ai.github.io/graph4nlp/guide/construction/iegraphconstruction.html), but I get an AttributeError. What is the correct way to call IEBasedGraphConstruction to construct a graph?
from stanfordcorenlp import StanfordCoreNLP
from graph4nlp.pytorch.modules.graph_construction.ie_graph_construction import IEBasedGraphConstruction

raw_data = 'James is in the shop. He buys eggs.'
# Assumes a Stanford CoreNLP server is already running on localhost:9000.
nlp_parser = StanfordCoreNLP('http://localhost', port=9000, timeout=300000)
props_coref = {
    'annotators': 'tokenize, ssplit, pos, lemma, ner, parse, coref',
    'tokenize.options':
        'splitHyphenated=true,normalizeParentheses=true,normalizeOtherBrackets=true',
    'tokenize.whitespace': False,
    'ssplit.isOneSentence': False,
    'outputFormat': 'json'
}
props_openie = {
    'annotators': 'tokenize, ssplit, pos, ner, parse, openie',
    'tokenize.options':
        'splitHyphenated=true,normalizeParentheses=true,normalizeOtherBrackets=true',
    'tokenize.whitespace': False,
    'ssplit.isOneSentence': False,
    'outputFormat': 'json',
    'openie.triple.strict': 'true'
}
processor_args = [props_coref, props_openie]
graphdata = IEBasedGraphConstruction.topology(raw_data, nlp_parser,
                                              processor_args=processor_args,
                                              merge_strategy=None,
                                              edge_strategy=None)
print(graphdata)
Error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Input In [8], in <cell line: 26>()
14 props_openie = {
15 'annotators': 'tokenize, ssplit, pos, ner, parse, openie',
16 "tokenize.options":
(...)
21 "openie.triple.strict": "true"
22 }
24 processor_args = [props_coref, props_openie]
---> 26 graphdata = ConstituencyBasedGraphConstruction.topology(raw_data, nlp_parser,
27 processor_args=processor_args,
28 merge_strategy=None,
29 edge_strategy=None)
30 print(graphdata)
AttributeError: type object 'ConstituencyBasedGraphConstruction' has no attribute 'topology'
Thank you
We are very sorry that we did not update the user guide accordingly during the version upgrade; we will address this in the next release.
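In the meantime, here is a minimal sketch of a workaround. It assumes the installed graph4nlp release renamed the classmethod topology to static_topology with an otherwise unchanged signature (our reading of recent releases, not something stated in the guide above), and it falls back to topology on older installs:

# Minimal sketch, assuming `static_topology` is the renamed entry point in
# newer graph4nlp releases and keeps the same signature as `topology`.
construct = getattr(IEBasedGraphConstruction, 'static_topology', None)
if construct is None:
    construct = getattr(IEBasedGraphConstruction, 'topology')  # older releases
graphdata = construct(raw_data, nlp_parser,
                      processor_args=processor_args,
                      merge_strategy=None,
                      edge_strategy=None)
print(graphdata)

Running dir(IEBasedGraphConstruction) will show which of the two names your installed version actually exposes.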