pytorch-crf
Issue about the backward()
When I calculate the loss using this module, I get `RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation`. But when I use CrossEntropyLoss instead, it runs successfully. My code:
```python
from torchcrf import CRF
from tqdm import tqdm

lossF = CRF(num_tags=num_tags, batch_first=True).to(device)  # CRF has learnable transition parameters, keep it on the same device

for batch in tqdm(trainloader, desc=f"Training Epoch {epoch}"):
    optimizer.zero_grad()
    inputs, targets = [x.to(device) for x in batch]
    if not use_crf:
        # CrossEntropyLoss expects flattened targets; the CRF keeps the (batch, seq_len) shape
        targets = targets.view(-1)
    if use_crf:
        bert_output = model(inputs, use_crf=use_crf)  # emission scores, shape (batch, seq_len, num_tags)
        # CRF.forward returns the log-likelihood, so negate it to obtain a loss
        loss = -lossF(bert_output, targets)
    # (CrossEntropyLoss branch omitted here)
    loss.backward()
    optimizer.step()
```
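As a side note, the CRF module itself backpropagates cleanly when used in isolation, which can help confirm that the error originates inside the model's `forward()` rather than in this library. Below is a minimal, self-contained sketch (the shapes and tag count are made up for illustration) that follows the documented pytorch-crf usage, including a padding mask and `decode`:

```python
import torch
from torchcrf import CRF

# Toy dimensions, assumed only for this example
batch_size, seq_len, num_tags = 2, 5, 4

crf = CRF(num_tags=num_tags, batch_first=True)
emissions = torch.randn(batch_size, seq_len, num_tags, requires_grad=True)
tags = torch.randint(num_tags, (batch_size, seq_len))
mask = torch.ones(batch_size, seq_len, dtype=torch.uint8)  # set padding positions to 0 in real data

loss = -crf(emissions, tags, mask=mask)  # negative log-likelihood
loss.backward()                          # runs without the in-place error
print(crf.decode(emissions, mask=mask))  # best tag sequence per sentence
```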
I'm guessing you have in-place operations inside your model's `forward()`; that error is usually caused by such operations.
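For reference, here is a minimal reproduction of that failure mode, unrelated to this library (`exp` is just an illustrative op whose output autograd saves for the backward pass):

```python
import torch

# In-place edit of a tensor that autograd saved for backward -> the reported error.
x = torch.randn(3, requires_grad=True)
y = torch.exp(x)        # exp saves its output to compute the gradient
y += 1                  # in-place op bumps y's version counter
try:
    y.sum().backward()
except RuntimeError as e:
    print(e)            # "... has been modified by an inplace operation"

# Out-of-place fix: build a new tensor instead of editing the saved one.
x = torch.randn(3, requires_grad=True)
y = torch.exp(x)
y = y + 1               # new tensor; the saved output is untouched
y.sum().backward()      # runs fine
```

Common culprits inside a model's `forward()` are `+=`, methods ending in `_` such as `add_` or `relu_`, and slice assignment on tensors that are later needed for the backward pass.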