
Some trouble when running your code

MrZhengXin opened this issue on Dec 21 '18 · 9 comments

```
Traceback (most recent call last):
  File "train.py", line 332, in <module>
    main()
  File "train.py", line 324, in main
    train_model(model, data, optim, i, params)
  File "train.py", line 179, in train_model
    score = eval_model(model, data, params)
  File "train.py", line 252, in eval_model
    score[metric] = getattr(utils, metric)(reference, candidate, params['log_path'], params['log'], config)
  File "/home/zhengxin/Global-Encoding/Global-Encoding-master/utils/metrics.py", line 58, in rouge
    rouge_results = r.convert_and_evaluate()
  File "/root/miniconda3/lib/python3.7/site-packages/pyrouge/Rouge155.py", line 361, in convert_and_evaluate
    rouge_output = self.evaluate(system_id, rouge_args)
  File "/root/miniconda3/lib/python3.7/site-packages/pyrouge/Rouge155.py", line 336, in evaluate
    rouge_output = check_output(command).decode("UTF-8")
  File "/root/miniconda3/lib/python3.7/subprocess.py", line 389, in check_output
    **kwargs).stdout
  File "/root/miniconda3/lib/python3.7/subprocess.py", line 466, in run
    with Popen(*popenargs, **kwargs) as process:
  File "/root/miniconda3/lib/python3.7/subprocess.py", line 769, in __init__
    restore_signals, start_new_session)
  File "/root/miniconda3/lib/python3.7/subprocess.py", line 1516, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
PermissionError: [Errno 13] Permission denied: 'RELEASE-1.5.5/ROUGE-1.5.5.pl'
```

I tried this on several different machines but got the same error, and I'm wondering what causes it QwQ.

MrZhengXin · Dec 21 '18

Hello, what exactly should the data look like? I don't quite understand this sentence from the author: does it mean renaming the txt files to .src and .tgt?

> Remember to put the data into a folder and name them train.src, train.tgt, valid.src, valid.tgt, test.src and test.tgt, and make a new folder inside called data

Zierzzz · Jan 06 '19

> Hello, what exactly should the data look like? I don't quite understand this sentence from the author: does it mean renaming the txt files to .src and .tgt? "Remember to put the data into a folder and name them train.src, train.tgt, valid.src, valid.tgt, test.src and test.tgt, and make a new folder inside called data"

Yes. Alternatively, take a look at preprocess.py line 27 and line 29 and add the suffix arguments there.
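For example, a minimal sketch like this (the original .txt file names below are hypothetical; substitute your own) copies plain-text splits into the names the README expects:

```python
# Hypothetical sketch: copy existing .txt splits to the train/valid/test
# .src/.tgt names that Global-Encoding's preprocessing looks for.
import shutil

pairs = {
    "train_article.txt": "train.src",   # source side of the training set
    "train_summary.txt": "train.tgt",   # target side of the training set
    "valid_article.txt": "valid.src",
    "valid_summary.txt": "valid.tgt",
    "test_article.txt": "test.src",
    "test_summary.txt": "test.tgt",
}
for old, new in pairs.items():
    shutil.copyfile(old, new)
```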

MrZhengXin · Jan 06 '19

Haha, this problem was already solved: just run chmod 777 ROUGE-1.5.5.pl.
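If you prefer to do it from Python, here is a minimal sketch (the path below is an assumption; point it at your own ROUGE installation) that adds the execute bits to the script so pyrouge can spawn it:

```python
# Hypothetical sketch: make ROUGE-1.5.5.pl executable for pyrouge.
import os
import stat

rouge_script = "RELEASE-1.5.5/ROUGE-1.5.5.pl"  # adjust to your ROUGE location
mode = os.stat(rouge_script).st_mode
os.chmod(rouge_script, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```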

MrZhengXin · Jan 06 '19

Could you send your processed saved_data to my email? [email protected]. Following the author's preprocessing steps, I get this error during training: RuntimeError: Length of all samples has to be greater than 0, but found an element in 'lengths' that is <= 0. Also, does the model no longer need a separate loss? The previous version had one, but it is gone after the recent update.

angeluau · Feb 19 '19

> Could you send your processed saved_data to my email? [email protected]. Following the author's preprocessing steps, I get this error during training: RuntimeError: Length of all samples has to be greater than 0, but found an element in 'lengths' that is <= 0. Also, does the model no longer need a separate loss? The previous version had one, but it is gone after the recent update.

Right, in the new version the loss is computed inside the model, so there is no need to call a separate loss. You could check whether your data contains empty lines.
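For example, a quick check like this (a minimal sketch; the data/ paths and file names are assumptions) reports any empty lines in the source and target files:

```python
# Hypothetical sketch: find empty lines, which end up as zero-length samples
# and trigger the "Length of all samples has to be greater than 0" error.
files = ["data/train.src", "data/train.tgt",
         "data/valid.src", "data/valid.tgt",
         "data/test.src", "data/test.tgt"]

for name in files:
    with open(name, encoding="utf-8") as f:
        for i, line in enumerate(f, start=1):
            if not line.strip():
                print(f"{name}: empty line at line {i}")
```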

JustinLin610 · Feb 25 '19

Traceback (most recent call last): File "train.py", line 322, in main() File "train.py", line 314, in main train_model(model, data, optim, i, params) File "train.py", line 161, in train_model raise e File "train.py", line 141, in train_model loss, outputs = model(src, lengths, dec, targets) File "/home/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in call result = self.forward(*input, **kwargs) File "/home/下载/Global-Encoding-master/models/seq2seq.py", line 40, in forward contexts, state = self.encoder(src, src_len.tolist()) File "/home/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in call result = self.forward(*input, **kwargs) File "/home/下载/Global-Encoding-master/models/rnn.py", line 52, in forward embs = pack(self.embedding(inputs), lengths) File "/home/anaconda3/lib/python3.7/site-packages/torch/nn/utils/rnn.py", line 148, in pack_padded_sequence return PackedSequence(torch._C._VariableFunctions._pack_padded_sequence(input, lengths, batch_first)) RuntimeError: Length of all samples has to be greater than 0, but found an element in 'lengths' that is <= 0 在train时,报错,请问这个是需要改mini-batch大小吗?

huoguo-wan · Mar 23 '19

Traceback (most recent call last): File "train.py", line 322, in main() File "train.py", line 314, in main train_model(model, data, optim, i, params) File "train.py", line 119, in train_model for src, tgt, src_len, tgt_len, original_src, original_tgt in trainloader: File "/home/mu/.conda/envs/dym/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 314, in next batch = self.collate_fn([self.dataset[i] for i in indices]) File "/home/mu/global-encoding/utils/data_helper.py", line 84, in padding src_pad = torch.zeros(len(src), max(src_len)).long() RuntimeError: sizes must be non-negative 在train时报错,请问这是什么报错

DaTtoYM · May 25 '19

Traceback (most recent call last): File "train.py", line 322, in main() File "train.py", line 314, in main train_model(model, data, optim, i, params) File "train.py", line 161, in train_model raise e File "train.py", line 141, in train_model loss, outputs = model(src, lengths, dec, targets) File "/home/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in call result = self.forward(*input, **kwargs) File "/home/下载/Global-Encoding-master/models/seq2seq.py", line 40, in forward contexts, state = self.encoder(src, src_len.tolist()) File "/home/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in call result = self.forward(*input, **kwargs) File "/home/下载/Global-Encoding-master/models/rnn.py", line 52, in forward embs = pack(self.embedding(inputs), lengths) File "/home/anaconda3/lib/python3.7/site-packages/torch/nn/utils/rnn.py", line 148, in pack_padded_sequence return PackedSequence(torch._C._VariableFunctions._pack_padded_sequence(input, lengths, batch_first)) RuntimeError: Length of all samples has to be greater than 0, but found an element in 'lengths' that is <= 0 在train时,报错,请问这个是需要改mini-batch大小吗?

Could it be that there are empty lines in your data?
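For reference, a minimal sketch (not from the repo) showing how a single zero-length sample, which is what an empty source line becomes, produces exactly this error in pack_padded_sequence:

```python
# Hypothetical sketch: reproduce the pack_padded_sequence error caused by a
# zero-length sample in the batch.
import torch
from torch.nn.utils.rnn import pack_padded_sequence

embs = torch.zeros(2, 5, 8)   # batch of 2 sequences, max length 5, embedding dim 8
lengths = [5, 0]              # the 0 is what an empty source line turns into
pack_padded_sequence(embs, lengths, batch_first=True)
# RuntimeError: Length of all samples has to be greater than 0,
# but found an element in 'lengths' that is <= 0
```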

JustinLin610 · May 30 '19

Traceback (most recent call last): File "train.py", line 322, in main() File "train.py", line 314, in main train_model(model, data, optim, i, params) File "train.py", line 119, in train_model for src, tgt, src_len, tgt_len, original_src, original_tgt in trainloader: File "/home/mu/.conda/envs/dym/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 314, in next batch = self.collate_fn([self.dataset[i] for i in indices]) File "/home/mu/global-encoding/utils/data_helper.py", line 84, in padding src_pad = torch.zeros(len(src), max(src_len)).long() RuntimeError: sizes must be non-negative 在train时报错,请问这是什么报错

It might also be the empty-line problem.
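For reference, a minimal sketch of how that line in data_helper.py can fail: if max(src_len) comes out negative (the lengths below are hypothetical; empty or malformed lines upstream are one way they can go wrong), torch.zeros cannot allocate the padded tensor. The exact error wording depends on the PyTorch version.

```python
# Hypothetical sketch: a negative size passed to torch.zeros makes the
# padding allocation fail, matching the error from data_helper.py's padding().
import torch

src_len = [-1, -1]  # hypothetical: every length in the batch is invalid
src_pad = torch.zeros(len(src_len), max(src_len)).long()
# Raises a RuntimeError such as "sizes must be non-negative"
# (newer PyTorch reports "Trying to create tensor with negative dimension").
```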

JustinLin610 · May 30 '19