pytorch-lanenet
Discriminative loss error: IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
Hi,
When running python3 lanenet/train.py --dataset ./data/training_data_example, I'm seeing the following exception:
Traceback (most recent call last):
File "lanenet/train.py", line 156, in <module>
main()
File "lanenet/train.py", line 144, in main
train_iou = train(train_loader, model, optimizer, epoch)
File "lanenet/train.py", line 68, in train
total_loss, binary_loss, instance_loss, out, train_iou = compute_loss(net_output, binary_label, instance_label)
File "/usr/local/lib/python3.6/dist-packages/lanenet-0.1.0-py3.6.egg/lanenet/model/model.py", line 75, in compute_loss
File "/home/lashar/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/lanenet-0.1.0-py3.6.egg/lanenet/model/loss.py", line 33, in forward
File "/usr/local/lib/python3.6/dist-packages/lanenet-0.1.0-py3.6.egg/lanenet/model/loss.py", line 71, in _discriminative_loss
File "/home/lashar/.local/lib/python3.6/site-packages/torch/functional.py", line 1100, in norm
return _VF.frobenius_norm(input, _dim, keepdim=keepdim)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
Any clues about how I can fix this? Not sure if I'm doing something incorrectly.
Thanks!
PS: I just saw issues/12 and its related commit. Changing it back to dim=0 makes it work. However, since the aforementioned issue says that the correct value is dim=1, I'm not sure if this works as intended? Would appreciate it if clarification could be provided!
Thanks!
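For what it's worth, the error itself is easy to reproduce in isolation: torch.norm dispatches to frobenius_norm when a dim argument is given, and dim=1 is out of range once the input has collapsed to a 1-D tensor. A minimal standalone sketch (illustrative only, not the repo's code):

import torch

# norm along dim=1 is fine for a 2-D tensor ...
x2d = torch.randn(3, 4)
print(torch.norm(x2d, dim=1).shape)  # torch.Size([3])

# ... but fails with the IndexError from the traceback once the tensor is 1-D
x1d = torch.randn(4)
torch.norm(x1d, dim=1)
# IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)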
I'm having the same problem. Reverting back to dim=0 resolves the issue.
Hi, I am running your code and I encountered the same problem. I tried both dim=0 and dim=1, but neither works.
Have there been any updates since your last commit? Please let me know how to fix it so I can run training! Thank you.
I also tried both dim=0 and dim=1; neither works. @klintan, are there any updates on this? Thanks.
Managed to fix it. It should be dim=1; the norm should be calculated along the "embedding" axis. The problem comes from
embedding_i = embedding_b[seg_mask_i]
which collapses the dims into a single-dimensional vector. Changing it to
embedding_i = embedding_b * seg_mask_i
works for me.
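To illustrate the point: boolean indexing flattens the selected elements into a 1-D vector (so any dim >= 1 becomes out of range for a later norm), while multiplying by the mask keeps the original layout. A rough sketch, assuming embedding_b is (embed_dim, n_pixels) and seg_mask_i is a same-shaped 0/1 mask; the names mirror the comment above, and the actual shapes in loss.py may differ:

import torch

embed_dim, n_pixels = 4, 6
embedding_b = torch.randn(embed_dim, n_pixels)                # pixel embeddings for one image
seg_mask_i = (torch.rand(embed_dim, n_pixels) > 0.5).float()  # illustrative instance mask

# Boolean indexing flattens everything into a 1-D vector, losing the axes:
flat = embedding_b[seg_mask_i.bool()]
print(flat.shape)    # e.g. torch.Size([13]); norm(..., dim=1) on this raises the IndexError

# Multiplying by the mask zeroes out non-instance pixels but keeps the 2-D layout,
# so a norm with an explicit dim stays well-defined:
masked = embedding_b * seg_mask_i
print(masked.shape)  # torch.Size([4, 6])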
@mummy2358 Thank you. Will give it a try.
@mummy2358 awesome thanks! Feel free to add a PR for the fix :)
@mummy2358 I do not know why, but on my side, after changing to embedding_i = embedding_b * seg_mask_i I still have the same problem. Not sure if it's machine-related: I tried on another PC and it works there. On that PC, reverting to dim=0 without the aforementioned change also works.
May I know where exactly this dim argument is located?
hey @mengnutonomy, it happened to me also. I don't know if this does anything, but it seems you have to run 'python setup.py install' every time you change the code: the traceback shows loss.py being loaded from the installed lanenet-0.1.0-py3.6.egg rather than from your working tree, so edits only take effect after reinstalling. Hope it'll work :)
thx a lot! re-running 'python setup.py install' works for me
Another workaround is to restore the lost dimension before the norm is taken:
# if indexing collapsed embedding_i to 1-D, add the missing dimension back
if len(embedding_i.shape) == 1:
    embedding_i = torch.unsqueeze(embedding_i, 0)