GSCNN
Error in DualTaskLoss while running the evaluation
Hi Coders,
I am trying to run the evaluation on the Cityscapes dataset as described in the README. I am using 2 GPUs, completed all the setup steps, and downloaded the Cityscapes dataset as required.
After getting past the basic steps, it now fails in DualTaskLoss. I am pasting the error below for your reference. Please help me with this issue. Thanks in advance.
=====================================Error Log=============================================
(shabnamenv) root@shabnam2-gpu:/temp/WORKSPACE/GSCNN# python train.py --evaluate --snapshot checkpoints/best_cityscapes_checkpoint.pth
/root/SETUPS/anaconda3/envs/shabnamenv/lib/python3.8/site-packages/setuptools/distutils_patch.py:25: UserWarning: Distutils was imported before Setuptools. This usage is discouraged and may exhibit undesirable behaviors or errors. Please use Setuptools' objects directly or at least import Setuptools first.
warnings.warn(
08-14 16:05:36.060 train fine cities: ['train/aachen', 'train/bochum', 'train/bremen', 'train/cologne', 'train/darmstadt', 'train/dusseldorf', 'train/erfurt', 'train/hamburg', 'train/hanover', 'train/jena', 'train/krefeld', 'train/monchengladbach', 'train/strasbourg', 'train/stuttgart', 'train/tubingen', 'train/ulm', 'train/weimar', 'train/zurich']
08-14 16:05:36.071 Cityscapes-train: 2975 images
08-14 16:05:36.071 val fine cities: ['val/frankfurt', 'val/munster', 'val/lindau']
08-14 16:05:36.073 Cityscapes-val: 500 images
08-14 16:05:36.073 Using Per Image based weighted loss
/root/SETUPS/anaconda3/envs/shabnamenv/lib/python3.8/site-packages/torch/nn/modules/loss.py:217: UserWarning: NLLLoss2d has been deprecated. Please use NLLLoss instead as a drop-in replacement and see https://pytorch.org/docs/master/nn.html#torch.nn.NLLLoss for more details.
warnings.warn("NLLLoss2d has been deprecated. "
/root/SETUPS/anaconda3/envs/shabnamenv/lib/python3.8/site-packages/torch/nn/reduction.py:44: UserWarning: size_average and reduce args will be deprecated, please use reduction='mean' instead.
warnings.warn(warning.format(ret))
08-14 16:05:36.074 Using Cross Entropy Loss
/root/SETUPS/anaconda3/envs/shabnamenv/lib/python3.8/site-packages/encoding/nn/syncbn.py:228: EncodingDeprecationWarning: encoding.nn.BatchNorm2d is now deprecated in favor of encoding.nn.SyncBatchNorm.
warnings.warn("encoding.nn.{} is now deprecated in favor of encoding.nn.{}."
/temp/WORKSPACE/GSCNN/network/mynn.py:29: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.
nn.init.kaiming_normal(module.weight)
08-14 16:05:37.111 Model params = 137.3M
08-14 16:05:39.992 Loading weights from model checkpoints/best_cityscapes_checkpoint.pth
08-14 16:05:40.656 Load Compelete
/root/SETUPS/anaconda3/envs/shabnamenv/lib/python3.8/site-packages/torch/nn/functional.py:3118: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
warnings.warn("Default upsampling behavior when mode={} is changed "
/temp/WORKSPACE/GSCNN/loss.py:160: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
return self.nll_loss(F.log_softmax(inputs), targets)
Traceback (most recent call last):
File "train.py", line 381, in
Thanks, Shabnam
Just do as the error message suggests:
replace g_hat = g_hat.view(N, -1)
with g_hat = g_hat.reshape(N, -1)
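For context, the error comes from calling .view() on a tensor that is no longer contiguous in memory; .view() only rearranges metadata and requires a contiguous layout, while .reshape() falls back to copying the data when a plain view is impossible. A minimal, self-contained sketch of the difference (illustrative names, not the actual GSCNN code):

import torch

# Build a non-contiguous tensor of the same kind of shape DualTaskLoss works with.
# permute() changes the strides without moving data, so the result is non-contiguous.
N, C, H, W = 2, 19, 4, 4
g_hat = torch.randn(N, C, H, W).permute(0, 2, 3, 1)

try:
    flat = g_hat.view(N, -1)        # raises RuntimeError on a non-contiguous tensor
except RuntimeError as e:
    print("view failed:", e)

flat = g_hat.reshape(N, -1)         # works: copies the data when a view is impossible
print(flat.shape)                   # torch.Size([2, 304])

So in loss.py, swapping the failing g_hat.view(N, -1) for g_hat.reshape(N, -1) (or calling .contiguous() before .view()) should resolve the crash without changing the loss value.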