torch.load() throws Segmentation fault (core dumped)
Dear author,
When I run python generator.py to generate patches, I get a Segmentation fault (core dumped). I tracked it down to the torch.load() call:
def generate_gpt_fconv(vocab_file, model_file, input_file, identifier_txt_file, identifier_token_file, output_file, beam_size):
    dictionary = Dictionary(vocab_file, min_cnt=0)
    print(len(dictionary))
    # the crash happens on this call
    loaded = torch.load(
        model_file, map_location="cpu"
    )
I tried map_location="cpu" and map_location="cuda:0". My GPU is an A5000, with cuda==10.1, pytorch==1.4.0, and python==3.8. The models I used are your pre-trained models.
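For reference, a minimal script along these lines should reproduce the call in isolation (the checkpoint path is just a placeholder, not the real file name of your released models):

import torch

# Placeholder path; substitute one of the released pre-trained checkpoints.
MODEL_FILE = "path/to/pretrained_model.pt"

# Isolate the failing call from the rest of generator.py.
loaded = torch.load(MODEL_FILE, map_location="cpu")
print(type(loaded))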
Could you help me? Thanks in advance!
Update: here is the underlying error behind the crash:
Exception has occurred: ModuleNotFoundError
No module named 'transformers.configuration_openai'
  File "/CURE/src/tester/generator.py", line 61, in generate_gpt_conut
    loaded = torch.load(
  File "/CURE/src/tester/generator.py", line 134, in <module>
    generate_gpt_conut(vocab_file, model_file, input_file, identifier_txt_file, identifier_token_file, output_file, beam_size)
ModuleNotFoundError: No module named 'transformers.configuration_openai'
But I have already installed transformers==2.10.0 with pip. Is there a problem with my installation?
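To double-check which transformers installation the interpreter actually picks up, a quick sanity check along these lines should help (nothing here is specific to CURE; the last line imports the module the unpickler fails to find):

import importlib
import sys

import transformers

# Show which interpreter and which transformers installation are in use.
print(sys.executable)
print(transformers.__version__, transformers.__file__)

# transformers 2.10.0 ships configuration_openai as a top-level submodule;
# later major releases reorganized it, which would explain the
# ModuleNotFoundError if a newer version is on the path.
importlib.import_module("transformers.configuration_openai")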