language-models-are-knowledge-graphs-pytorch
Using GPT as shown in the paper
I am interested in using GPT as the main model, as BERT is not performing well.
Has it been implemented? Otherwise, I am ready to implement it myself. Please let me know.
Oh, I found it here. It turns out the choices are:
'bert-large-uncased', 'bert-large-cased', 'bert-base-uncased', 'bert-base-cased', 'gpt2', 'gpt2-medium', 'gpt2-large', 'gpt2-xl'
I attempted to run it with the gpt2-xl parameters; however, I got the following error:

Traceback (most recent call last):
  File "/usr/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/content/language-models-are-knowledge-graphs-pytorch/process.py", line 29, in bfs
    return BFS(s, end, graph, max_size, black_list_relation)
  File "/content/language-models-are-knowledge-graphs-pytorch/utils.py", line 19, in BFS
    visited = [False] * (max(graph.keys())+100)
ValueError: max() arg is an empty sequence
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "extract.py", line 74, in
I also tried running it with just gpt2-large and got the same error and an empty output file.
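For what it's worth, the ValueError comes from `max(graph.keys())` being called on an empty dict in `BFS`, which suggests no candidate relations were extracted for that sentence from the GPT-2 attention maps. A minimal sketch of a guard, assuming the `BFS` signature shown in the traceback (this is not the repo's own fix):

```python
# Sketch only -- signature assumed from the traceback (utils.py, BFS).
# When no candidate relations are extracted, `graph` is an empty dict and
# max(graph.keys()) raises ValueError. Returning an empty result lets
# extract.py continue, though that sentence still yields no triples.
def BFS(s, end, graph, max_size, black_list_relation):
    if not graph:
        # No edges were extracted for this sentence; nothing to search.
        return []
    visited = [False] * (max(graph.keys()) + 100)
    # ... rest of the original BFS implementation unchanged ...
```

Note that this only masks the symptom; the empty output file suggests the GPT-2 runs aren't producing usable relation candidates in the first place.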
I got this error too.