chimera
Problem loading the data.vocab.pt
Hi! I'm having trouble loading the data checkpoints.
The specific output and error are:
WebNLG Pre-process training data
Training Set
  corpus        Read Corpus                     0:00:00.000065
  graphify      RDF to Graph                    0:00:00.000024
  spelling      Fix Spelling                    0:00:00.000018
  entities      Describe entities               0:00:00.000025
  match-ents    Match Entities                  0:00:00.000042
  match-plans   Match Plans                     0:00:00.000020
  tokenize      Tokenize Plans & Sentences      0:00:00.000017
  to-json       Export in a readable format     0:00:00.000022
Dev Set
  corpus        Read Corpus                     0:00:00.000042
  graphify      RDF to Graph                    0:00:00.000019
  spelling      Fix Spelling                    0:00:00.000019
  entities      Describe entities               0:00:00.000020
  match-ents    Match Entities                  0:00:00.000020
  match-plans   Match Plans                     0:00:00.000019
  tokenize      Tokenize Plans & Sentences      0:00:00.000018
  to-json       Export in a readable format     0:00:00.000016
Train Planner
  planner       Learn planner                   0:00:00.000036
Train Model
  model         Initialize OpenNMT              0:00:00.000034
  expose        Expose Train Data               0:00:00.000017
  pre-process   Pre-process Train and Dev       0:00:00.000018
  train         Train model
EXEC /home/ubuntu/miniconda3/envs/env_pytorch/bin/python /home/ubuntu/chimera/model/../libs/OpenNMT/train.py -train_steps 30000 -save_checkpoint_steps 1000 -batch_size 16 -word_vec_size 300 -feat_vec_size 10 -feat_merge concat -layers 3 -copy_attn -position_encoding -data /tmp/tmpz79gzk1g/data -save_model /tmp/tmpo6y8o6dz/ -world_size 1 -gpu_ranks 0
------------------------------------------------ (so the error happened here) -------------------------------------------------------
Traceback (most recent call last):
File "/home/ubuntu/chimera/model/../libs/OpenNMT/train.py", line 109, in
--------------------------------------- (the error is above) ---------------------------------------
Could you kindly provide some help? I would be very grateful!
Did it successfully create a vocabulary file in /tmp/tmpz79gzk1g/data?
If there is no persistent tmp storage on your server, perhaps you should export TMPDIR to point to some other directory.
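For illustration, here is a minimal sketch of doing the same thing from inside Python rather than the shell; the directory name is just an example, any path with enough free space works, and it has to run before any temporary directories are created (e.g. near the top of the pipeline script):

    import os
    import tempfile

    # Example path only: point TMPDIR at a persistent directory of your choice.
    os.environ["TMPDIR"] = "/home/ubuntu/chimera-tmp"
    os.makedirs(os.environ["TMPDIR"], exist_ok=True)

    # Force the tempfile module to re-read TMPDIR on the next call.
    tempfile.tempdir = None
    print(tempfile.gettempdir())  # should now print /home/ubuntu/chimera-tmp

Setting it in the shell (export TMPDIR=...) before launching has the same effect, since Python's tempfile module honors TMPDIR.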
Yes, the vocabulary file was created in /tmp. The files are below:
Also, inside the data.vocab.pt:
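For reference, a minimal sketch of loading the file for inspection; the path is copied from the training log above (adjust it if TMPDIR was changed), and the exact structure of the loaded object depends on the OpenNMT-py version bundled with chimera:

    import torch

    # Load the vocabulary that OpenNMT's preprocessing step wrote.
    vocab = torch.load("/tmp/tmpz79gzk1g/data.vocab.pt")

    print(type(vocab))
    # Depending on the OpenNMT-py version this is either a dict of fields
    # or a list of (name, field) pairs, so print it defensively.
    if isinstance(vocab, dict):
        print(list(vocab.keys()))
    else:
        print(vocab)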
