AttributeError: 'Mask2FormerHead' object has no attribute 'threshold'
06/17 11:25:03 - mmengine - INFO - Iter(train) [ 50/90000] base_lr: 9.9951e-05 lr: 9.9951e-06 eta: 1 day, 1:16:30 time: 0.9964 data_time: 0.0160 memory: 20881 grad_norm: 2.4602 loss: 1.3251 decode.loss_cls: 0.0000 decode.loss_mask: 0.0000 decode.loss_dice: 0.0000 decode.d0.loss_cls: 1.3249 decode.d0.loss_mask: 0.0000 decode.d0.loss_dice: 0.0000 decode.d1.loss_cls: 0.0001 decode.d1.loss_mask: 0.0000 decode.d1.loss_dice: 0.0000 decode.d2.loss_cls: 0.0000 decode.d2.loss_mask: 0.0000 decode.d2.loss_dice: 0.0000 decode.d3.loss_cls: 0.0000 decode.d3.loss_mask: 0.0000 decode.d3.loss_dice: 0.0000 decode.d4.loss_cls: 0.0000 decode.d4.loss_mask: 0.0000 decode.d4.loss_dice: 0.0000 decode.d5.loss_cls: 0.0000 decode.d5.loss_mask: 0.0000 decode.d5.loss_dice: 0.0000 decode.d6.loss_cls: 0.0000 decode.d6.loss_mask: 0.0000 decode.d6.loss_dice: 0.0000 decode.d7.loss_cls: 0.0000 decode.d7.loss_mask: 0.0000 decode.d7.loss_dice: 0.0000 decode.d8.loss_cls: 0.0000 decode.d8.loss_mask: 0.0000 decode.d8.loss_dice: 0.0000
06/17 11:25:52 - mmengine - INFO - Iter(train) [ 100/90000] base_lr: 9.9901e-05 lr: 9.9901e-06 eta: 1 day, 1:03:36 time: 0.9953 data_time: 0.0158 memory: 19056 grad_norm: 2.1069 loss: 1.0589 decode.loss_cls: 0.0000 decode.loss_mask: 0.0000 decode.loss_dice: 0.0000 decode.d0.loss_cls: 1.0588 decode.d0.loss_mask: 0.0000 decode.d0.loss_dice: 0.0000 decode.d1.loss_cls: 0.0001 decode.d1.loss_mask: 0.0000 decode.d1.loss_dice: 0.0000 decode.d2.loss_cls: 0.0000 decode.d2.loss_mask: 0.0000 decode.d2.loss_dice: 0.0000 decode.d3.loss_cls: 0.0000 decode.d3.loss_mask: 0.0000 decode.d3.loss_dice: 0.0000 decode.d4.loss_cls: 0.0000 decode.d4.loss_mask: 0.0000 decode.d4.loss_dice: 0.0000 decode.d5.loss_cls: 0.0000 decode.d5.loss_mask: 0.0000 decode.d5.loss_dice: 0.0000 decode.d6.loss_cls: 0.0000 decode.d6.loss_mask: 0.0000 decode.d6.loss_dice: 0.0000 decode.d7.loss_cls: 0.0000 decode.d7.loss_mask: 0.0000 decode.d7.loss_dice: 0.0000 decode.d8.loss_cls: 0.0000 decode.d8.loss_mask: 0.0000 decode.d8.loss_dice: 0.0000
06/17 11:26:42 - mmengine - INFO - Iter(train) [ 150/90000] base_lr: 9.9851e-05 lr: 9.9851e-06 eta: 1 day, 0:58:56 time: 0.9952 data_time: 0.0152 memory: 19056 grad_norm: 1.7905 loss: 0.8241 decode.loss_cls: 0.0000 decode.loss_mask: 0.0000 decode.loss_dice: 0.0000 decode.d0.loss_cls: 0.8240 decode.d0.loss_mask: 0.0000 decode.d0.loss_dice: 0.0000 decode.d1.loss_cls: 0.0000 decode.d1.loss_mask: 0.0000 decode.d1.loss_dice: 0.0000 decode.d2.loss_cls: 0.0000 decode.d2.loss_mask: 0.0000 decode.d2.loss_dice: 0.0000 decode.d3.loss_cls: 0.0000 decode.d3.loss_mask: 0.0000 decode.d3.loss_dice: 0.0000 decode.d4.loss_cls: 0.0000 decode.d4.loss_mask: 0.0000 decode.d4.loss_dice: 0.0000 decode.d5.loss_cls: 0.0000 decode.d5.loss_mask: 0.0000 decode.d5.loss_dice: 0.0000 decode.d6.loss_cls: 0.0000 decode.d6.loss_mask: 0.0000 decode.d6.loss_dice: 0.0000 decode.d7.loss_cls: 0.0000 decode.d7.loss_mask: 0.0000 decode.d7.loss_dice: 0.0000 decode.d8.loss_cls: 0.0000 decode.d8.loss_mask: 0.0000 decode.d8.loss_dice: 0.0000
06/17 11:27:32 - mmengine - INFO - Iter(train) [ 200/90000] base_lr: 9.9801e-05 lr: 9.9801e-06 eta: 1 day, 0:56:24 time: 0.9968 data_time: 0.0156 memory: 19056 grad_norm: 1.4811 loss: 0.6155 decode.loss_cls: 0.0000 decode.loss_mask: 0.0000 decode.loss_dice: 0.0000 decode.d0.loss_cls: 0.6155 decode.d0.loss_mask: 0.0000 decode.d0.loss_dice: 0.0000 decode.d1.loss_cls: 0.0000 decode.d1.loss_mask: 0.0000 decode.d1.loss_dice: 0.0000 decode.d2.loss_cls: 0.0000 decode.d2.loss_mask: 0.0000 decode.d2.loss_dice: 0.0000 decode.d3.loss_cls: 0.0000 decode.d3.loss_mask: 0.0000 decode.d3.loss_dice: 0.0000 decode.d4.loss_cls: 0.0000 decode.d4.loss_mask: 0.0000 decode.d4.loss_dice: 0.0000 decode.d5.loss_cls: 0.0000 decode.d5.loss_mask: 0.0000 decode.d5.loss_dice: 0.0000 decode.d6.loss_cls: 0.0000 decode.d6.loss_mask: 0.0000 decode.d6.loss_dice: 0.0000 decode.d7.loss_cls: 0.0000 decode.d7.loss_mask: 0.0000 decode.d7.loss_dice: 0.0000 decode.d8.loss_cls: 0.0000 decode.d8.loss_mask: 0.0000 decode.d8.loss_dice: 0.0000
06/17 11:28:24 - mmengine - INFO - Iter(train) [ 250/90000] base_lr: 9.9751e-05 lr: 9.9751e-06 eta: 1 day, 1:10:02 time: 1.0503 data_time: 0.0162 memory: 19056 grad_norm: 1.1710 loss: 0.4363 decode.loss_cls: 0.0000 decode.loss_mask: 0.0000 decode.loss_dice: 0.0000 decode.d0.loss_cls: 0.4363 decode.d0.loss_mask: 0.0000 decode.d0.loss_dice: 0.0000 decode.d1.loss_cls: 0.0000 decode.d1.loss_mask: 0.0000 decode.d1.loss_dice: 0.0000 decode.d2.loss_cls: 0.0000 decode.d2.loss_mask: 0.0000 decode.d2.loss_dice: 0.0000 decode.d3.loss_cls: 0.0000 decode.d3.loss_mask: 0.0000 decode.d3.loss_dice: 0.0000 decode.d4.loss_cls: 0.0000 decode.d4.loss_mask: 0.0000 decode.d4.loss_dice: 0.0000 decode.d5.loss_cls: 0.0000 decode.d5.loss_mask: 0.0000 decode.d5.loss_dice: 0.0000 decode.d6.loss_cls: 0.0000 decode.d6.loss_mask: 0.0000 decode.d6.loss_dice: 0.0000 decode.d7.loss_cls: 0.0000 decode.d7.loss_mask: 0.0000 decode.d7.loss_dice: 0.0000 decode.d8.loss_cls: 0.0000 decode.d8.loss_mask: 0.0000 decode.d8.loss_dice: 0.0000
06/17 11:29:14 - mmengine - INFO - Iter(train) [ 300/90000] base_lr: 9.9701e-05 lr: 9.9701e-06 eta: 1 day, 1:06:42 time: 0.9976 data_time: 0.0153 memory: 19056 grad_norm: 0.8709 loss: 0.2910 decode.loss_cls: 0.0000 decode.loss_mask: 0.0000 decode.loss_dice: 0.0000 decode.d0.loss_cls: 0.2910 decode.d0.loss_mask: 0.0000 decode.d0.loss_dice: 0.0000 decode.d1.loss_cls: 0.0000 decode.d1.loss_mask: 0.0000 decode.d1.loss_dice: 0.0000 decode.d2.loss_cls: 0.0000 decode.d2.loss_mask: 0.0000 decode.d2.loss_dice: 0.0000 decode.d3.loss_cls: 0.0000 decode.d3.loss_mask: 0.0000 decode.d3.loss_dice: 0.0000 decode.d4.loss_cls: 0.0000 decode.d4.loss_mask: 0.0000 decode.d4.loss_dice: 0.0000 decode.d5.loss_cls: 0.0000 decode.d5.loss_mask: 0.0000 decode.d5.loss_dice: 0.0000 decode.d6.loss_cls: 0.0000 decode.d6.loss_mask: 0.0000 decode.d6.loss_dice: 0.0000 decode.d7.loss_cls: 0.0000 decode.d7.loss_mask: 0.0000 decode.d7.loss_dice: 0.0000 decode.d8.loss_cls: 0.0000 decode.d8.loss_mask: 0.0000 decode.d8.loss_dice: 0.0000
06/17 11:29:15 - mmengine - INFO - Saving checkpoint at 300 iterations
/ProjectRoot/openmmlab/lib/python3.8/site-packages/mmdet/models/layers/positional_encoding.py:103: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
dim_t = self.temperature**(2 * (dim_t // 2) / self.num_feats)
Traceback (most recent call last):
File "tools/train.py", line 104, in
My custom dataset:

```python
# Copyright (c) OpenMMLab. All rights reserved.
from mmseg.registry import DATASETS

from .basesegdataset import BaseSegDataset


@DATASETS.register_module()
class PaintingDataset(BaseSegDataset):
    """Painting dataset.

    The ``img_suffix`` is fixed to '.jpeg' and ``seg_map_suffix`` is
    fixed to '.png'.
    """
    METAINFO = dict(
        classes=('background', 'master'),
        palette=[[0, 0, 0], [255, 255, 255]])

    def __init__(self,
                 img_suffix='.jpeg',
                 seg_map_suffix='.png',
                 reduce_zero_label=True,
                 **kwargs) -> None:
        super().__init__(
            img_suffix=img_suffix,
            seg_map_suffix=seg_map_suffix,
            reduce_zero_label=reduce_zero_label,
            **kwargs)
```
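For context, a dataset registered this way is normally referenced from the training config by its registered type name; a minimal sketch, assuming the file lives somewhere importable (module path and data paths below are placeholders, not from my actual setup):

```python
# Make the @DATASETS.register_module() call run at config-load time.
custom_imports = dict(
    imports=['mmseg.datasets.painting'],  # placeholder module path
    allow_failed_imports=False)

train_dataloader = dict(
    dataset=dict(
        type='PaintingDataset',
        data_root='data/painting',  # placeholder path
        data_prefix=dict(
            img_path='images', seg_map_path='annotations')))
```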
I only have two classes: one background and one foreground. Some of my images contain only foreground and no background, so when I set num_classes=2 with reduce_zero_label=False, my validation result is NaN. I therefore set num_classes=1 with reduce_zero_label=True, but then I hit the AttributeError above. The decode head of Mask2Former does not inherit from mmseg's BaseDecodeHead; it is referenced from mmdet. What should I do? Please give me some suggestions, or tell me how to keep the result from being NaN when num_classes=2.
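A minimal sketch of why the num_classes=2 metric goes NaN: when a class never appears in either prediction or ground truth, its union is 0, so its per-class IoU is 0/0 = NaN, and a plain mean over classes propagates that NaN. This is a toy illustration, not mmseg's actual metric code:

```python
import numpy as np

# Toy image where every pixel is foreground (class 1) in both
# prediction and ground truth, so background (class 0) never appears.
pred = np.ones((4, 4), dtype=np.int64)
gt = np.ones((4, 4), dtype=np.int64)

ious = []
for cls in range(2):  # num_classes = 2
    inter = np.logical_and(pred == cls, gt == cls).sum()
    union = np.logical_or(pred == cls, gt == cls).sum()
    # union == 0 for the absent background class -> NaN IoU
    ious.append(inter / union if union > 0 else float('nan'))

print(ious)              # background IoU is NaN
print(np.mean(ious))     # plain mean over classes is NaN
print(np.nanmean(ious))  # ignoring absent classes gives a finite mIoU
```

If that is indeed the cause, one option worth trying (I have not verified it against this exact setup) is the `nan_to_num` argument of mmseg's `IoUMetric`, e.g. `val_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU'], nan_to_num=0)`, which replaces NaN per-class scores before averaging.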
Have you solved this problem?
Has this problem been solved?