
Error in DualTaskLoss while running the evaluation

Open shabnambanu1983 opened this issue 4 years ago • 1 comment

Hi Coders,

I am trying to run the evaluation on the Cityscapes dataset as described in the README. I am using 2 GPUs, completed all the setup steps, and downloaded the Cityscapes dataset as required.

After getting past the basic steps, it is now failing in DualTaskLoss. I am pasting the error below for your reference. Please help me with this issue. Thanks in advance.

===================================== Error log =============================================

(shabnamenv) root@shabnam2-gpu:/temp/WORKSPACE/GSCNN# python train.py --evaluate --snapshot checkpoints/best_cityscapes_checkpoint.pth

/root/SETUPS/anaconda3/envs/shabnamenv/lib/python3.8/site-packages/setuptools/distutils_patch.py:25: UserWarning: Distutils was imported before Setuptools. This usage is discouraged and may exhibit undesirable behaviors or errors. Please use Setuptools' objects directly or at least import Setuptools first.
  warnings.warn(
08-14 16:05:36.060 train fine cities: ['train/aachen', 'train/bochum', 'train/bremen', 'train/cologne', 'train/darmstadt', 'train/dusseldorf', 'train/erfurt', 'train/hamburg', 'train/hanover', 'train/jena', 'train/krefeld', 'train/monchengladbach', 'train/strasbourg', 'train/stuttgart', 'train/tubingen', 'train/ulm', 'train/weimar', 'train/zurich']
08-14 16:05:36.071 Cityscapes-train: 2975 images
08-14 16:05:36.071 val fine cities: ['val/frankfurt', 'val/munster', 'val/lindau']
08-14 16:05:36.073 Cityscapes-val: 500 images
08-14 16:05:36.073 Using Per Image based weighted loss
/root/SETUPS/anaconda3/envs/shabnamenv/lib/python3.8/site-packages/torch/nn/modules/loss.py:217: UserWarning: NLLLoss2d has been deprecated. Please use NLLLoss instead as a drop-in replacement and see https://pytorch.org/docs/master/nn.html#torch.nn.NLLLoss for more details.
  warnings.warn("NLLLoss2d has been deprecated. "
/root/SETUPS/anaconda3/envs/shabnamenv/lib/python3.8/site-packages/torch/nn/_reduction.py:44: UserWarning: size_average and reduce args will be deprecated, please use reduction='mean' instead.
  warnings.warn(warning.format(ret))
08-14 16:05:36.074 Using Cross Entropy Loss
/root/SETUPS/anaconda3/envs/shabnamenv/lib/python3.8/site-packages/encoding/nn/syncbn.py:228: EncodingDeprecationWarning: encoding.nn.BatchNorm2d is now deprecated in favor of encoding.nn.SyncBatchNorm.
  warnings.warn("encoding.nn.{} is now deprecated in favor of encoding.nn.{}."
/temp/WORKSPACE/GSCNN/network/mynn.py:29: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.
  nn.init.kaiming_normal(module.weight)
08-14 16:05:37.111 Model params = 137.3M
08-14 16:05:39.992 Loading weights from model checkpoints/best_cityscapes_checkpoint.pth
08-14 16:05:40.656 Load Compelete
/root/SETUPS/anaconda3/envs/shabnamenv/lib/python3.8/site-packages/torch/nn/functional.py:3118: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  warnings.warn("Default upsampling behavior when mode={} is changed "
/temp/WORKSPACE/GSCNN/loss.py:160: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
  return self.nll_loss(F.log_softmax(inputs), targets)
Traceback (most recent call last):
  File "train.py", line 381, in <module>
    main()
  File "train.py", line 140, in main
    validate(val_loader, net, criterion_val,
  File "train.py", line 303, in validate
    loss_dict = criterion((seg_out, edge_out), (mask_cuda, edge_cuda))
  File "/root/SETUPS/anaconda3/envs/shabnamenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/temp/WORKSPACE/GSCNN/loss.py", line 109, in forward
    losses['dual_loss'] = self.dual_weight * self.dual_task(segin, segmask)
  File "/root/SETUPS/anaconda3/envs/shabnamenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/temp/WORKSPACE/GSCNN/my_functionals/DualTaskLoss.py", line 120, in forward
    g_hat = g_hat.view(N, -1)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.

Thanks, Shabnam

shabnambanu1983 avatar Aug 14 '20 16:08 shabnambanu1983

Just do as the error message suggests: in my_functionals/DualTaskLoss.py, replace g_hat = g_hat.view(N, -1) with g_hat = g_hat.reshape(N, -1). See the sketch below.
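For reference, a minimal sketch of the patched spot, assuming the surrounding code in my_functionals/DualTaskLoss.py matches the upstream repo (the line number comes from the traceback above; only the failing line changes):

    # my_functionals/DualTaskLoss.py, inside forward(), around line 120
    # Original line fails because g_hat is not contiguous in memory:
    #     g_hat = g_hat.view(N, -1)
    # Either of these works; reshape() only copies when it has to:
    g_hat = g_hat.reshape(N, -1)
    # or, equivalently:
    # g_hat = g_hat.contiguous().view(N, -1)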

iariav avatar Sep 08 '20 13:09 iariav
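For context, .view() can only reinterpret a tensor whose size and stride are compatible with the requested shape without copying, and tensors produced by ops like permute() or transpose() are usually non-contiguous, which is exactly what this RuntimeError reports. A small standalone illustration, not taken from the repo:

    import torch

    x = torch.randn(2, 3, 4).permute(0, 2, 1)  # permute makes the tensor non-contiguous
    print(x.is_contiguous())                   # False

    # x.view(2, -1) would raise the same RuntimeError quoted above.
    y = x.reshape(2, -1)                       # works: falls back to a copy when needed
    z = x.contiguous().view(2, -1)             # also works: copy explicitly, then view
    print(y.shape, z.shape)                    # torch.Size([2, 12]) torch.Size([2, 12])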