Fast-Pytorch

'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor

Open · kayanfong opened this issue 3 years ago · 0 comments

Hi there! I am a newbie and encountered the runtime error "'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor". I've searched online, but I am too new to understand how I should alter the code. If you come across this question, please help 👍
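For context, this error is typically raised by torch.nn.utils.rnn.pack_padded_sequence: since PyTorch 1.7, its lengths argument must be a 1D int64 tensor on the CPU, even when the data being packed lives on the GPU. A minimal sketch of the usual fix (the tensor names and shapes below are illustrative, not taken from the notebook) is to keep the lengths on the CPU, or move them back with .cpu(), before packing:

import torch
from torch.nn.utils.rnn import pack_padded_sequence

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative stand-ins for a real batch: a padded, embedded input on the device and
# the true length of each sequence in the batch (sorted longest-first for packing).
embedded = torch.randn(10, 64, 500, device=device)  # (max_len, batch_size, hidden_size)
lengths = torch.sort(torch.randint(1, 11, (64,)), descending=True).values.to(device)

# pack_padded_sequence(embedded, lengths) would raise the reported RuntimeError on a GPU;
# moving only the lengths back to the CPU resolves it while the data stays on the device.
packed = pack_padded_sequence(embedded, lengths.cpu())

The training setup posted with the issue follows.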

# Configure training/optimization
clip = 50.0
teacher_forcing_ratio = 1.0
learning_rate = 0.0001
decoder_learning_ratio = 5.0
n_iteration = 4000
print_every = 1
save_every = 500

# Ensure dropout layers are in train mode
encoder.train()
decoder.train()

# Initialize optimizers
print('Building optimizers ...')
encoder_optimizer = optim.Adam(encoder.parameters(), lr=learning_rate)
decoder_optimizer = optim.Adam(decoder.parameters(), lr=learning_rate * decoder_learning_ratio)
if loadFilename:
    encoder_optimizer.load_state_dict(encoder_optimizer_sd)
    decoder_optimizer.load_state_dict(decoder_optimizer_sd)

# Run training iterations
print("Starting Training!")
trainIters(model_name, voc, pairs, encoder, decoder, encoder_optimizer, decoder_optimizer,
           embedding, encoder_n_layers, decoder_n_layers, save_dir, n_iteration, batch_size,
           print_every, save_every, clip, corpus_name, loadFilename)

kayanfong · Feb 14, 2022