
Chat crashes

Open Chenjm08 opened this issue 2 years ago • 6 comments

Running the following command causes a crash. How can I fix it?

python3 main.py chat
Doragd > 你在 干嘛

Crash log:

Traceback (most recent call last):
  File "main.py", line 38, in <module>
    fire.Fire()
  File "/home/chenjm/.local/lib/python3.8/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/chenjm/.local/lib/python3.8/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/home/chenjm/.local/lib/python3.8/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "main.py", line 28, in chat
    output_words = train_eval.output_answer(input_sentence, searcher, sos, eos, unknown, opt, word2ix, ix2word)
  File "/home/chenjm/chat-ai/Chinese-Chatbot-PyTorch-Implementation/train_eval.py", line 291, in output_answer
    tokens = generate(input_seq, searcher, sos, eos, opt)
  File "/home/chenjm/chat-ai/Chinese-Chatbot-PyTorch-Implementation/train_eval.py", line 202, in generate
    tokens, scores = searcher(sos, eos, input_batch, input_lengths, opt.max_generate_length, opt.device)
  File "/home/chenjm/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/chenjm/chat-ai/Chinese-Chatbot-PyTorch-Implementation/utils/greedysearch.py", line 17, in forward
    encoder_outputs, encoder_hidden = self.encoder(input_seq, input_length)
  File "/home/chenjm/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/chenjm/chat-ai/Chinese-Chatbot-PyTorch-Implementation/model.py", line 51, in forward
    packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths)
  File "/home/chenjm/.local/lib/python3.8/site-packages/torch/nn/utils/rnn.py", line 262, in pack_padded_sequence
    _VF._pack_padded_sequence(input, lengths, batch_first)
RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor

Chenjm08 avatar Feb 28 '23 09:02 Chenjm08

Either switch to a different torch version, or change lengths to lengths.cpu()
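For context on why .cpu() is the fix: recent PyTorch versions require the `lengths` argument of pack_padded_sequence to live on the CPU even when the data tensor is on the GPU. A minimal sketch (the tensors and sizes here are made up for illustration):

```python
import torch
import torch.nn.utils.rnn as rnn_utils

# Padded batch: two sequences padded to length 4, feature dim 3,
# laid out (seq_len, batch, features) as pack_padded_sequence
# expects by default (batch_first=False).
embedded = torch.randn(4, 2, 3)
lengths = torch.tensor([4, 2])  # true lengths, longest first

# pack_padded_sequence requires `lengths` to be a 1-D CPU int64
# tensor even when `embedded` lives on the GPU; .cpu() is a no-op
# when the tensor is already on the CPU, so the fix is safe on
# both devices.
packed = rnn_utils.pack_padded_sequence(embedded, lengths.cpu())
print(packed.data.shape)  # 4 + 2 = 6 non-padded timesteps
```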

666github100 avatar Apr 12 '23 01:04 666github100

Either switch to a different torch version, or change lengths to lengths.cpu()

torch is already the latest version. After changing it to lengths.to(opt.device), it triggers another runtime error:

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
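This second error is the general "same device" rule surfacing: an operation like index_select requires all of its tensor arguments on one device, so moving lengths onto cuda:0 with .to(opt.device) goes in the wrong direction (pack_padded_sequence wants it on the CPU). A minimal sketch of the rule, using made-up tensors:

```python
import torch

weights = torch.randn(6, 4)
idx = torch.tensor([0, 2, 5], dtype=torch.long)

# index_select needs the source tensor and the index tensor on the
# same device; a cuda:0 source with a cpu index (or vice versa)
# raises the "Expected all tensors to be on the same device"
# RuntimeError quoted above. Both tensors here are on the CPU,
# so the call succeeds.
out = torch.index_select(weights, 0, idx)
print(out.shape)  # rows 0, 2 and 5 -> (3, 4)
```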

anfogy avatar Jun 06 '23 07:06 anfogy

Do you have any solution to this problem?

Whylickspittle avatar Jun 15 '23 08:06 Whylickspittle

Do you have any solution to this problem?

model.py
Line 51: change packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths) to packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths.cpu())
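Applied in context, the fix looks like this. The Encoder below is a hypothetical minimal stand-in for the one in model.py (class name, layer choices, and sizes are assumptions), showing only the embed → pack → GRU → unpack pattern with the .cpu() call:

```python
import torch
import torch.nn as nn
import torch.nn.utils.rnn as rnn_utils

class Encoder(nn.Module):
    """Toy encoder sketch; not the project's actual model.py."""

    def __init__(self, vocab_size=100, hidden_size=8):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)

    def forward(self, input_seq, input_lengths):
        embedded = self.embedding(input_seq)  # (T, B, H)
        # lengths must be a 1-D CPU int64 tensor no matter where the
        # model lives, hence .cpu() (the fix from this thread)
        packed = rnn_utils.pack_padded_sequence(embedded,
                                                input_lengths.cpu())
        outputs, hidden = self.gru(packed)
        outputs, _ = rnn_utils.pad_packed_sequence(outputs)
        return outputs, hidden

enc = Encoder()
seq = torch.randint(0, 100, (5, 2))   # (seq_len=5, batch=2)
lengths = torch.tensor([5, 3])        # longest sequence first
outputs, hidden = enc(seq, lengths)
print(outputs.shape)  # (5, 2, 8)
```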

Whylickspittle avatar Jun 15 '23 08:06 Whylickspittle

Do you have any solution to this problem?

model.py, line 51: change packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths) to packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths.cpu())

Woo, haven't checked it yet, but thank you!

anfogy avatar Jun 15 '23 08:06 anfogy

After running main.py, it doesn't enter chat mode. Instead it prints: os: <module 'os' from '<path>' preprocess: <function preprocess at 0x000001C8EE6B57B8> train_eval: <module 'train_eval' from '<path>' fire: <module 'fire' from '<path>' QA_test: <module 'QA_data.QA_test' from '<path>' Config: <class 'config.Config'> chat: <function chat at 0x000001C8EDFA2EA0>. Please help.

pinst7 avatar Dec 06 '23 05:12 pinst7