semantic-segmentation-codebase
Codebase for semantic segmentation experiments
Hello, I placed SegmentationClassAug in the same directory as SegmentationClass and changed SegmentationClass on line 31 of lib/datasets/VOCDataset.py to SegmentationClassAug, but after running train.py it stays stuck at 0% | | 0/30000 [00:00
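For reference, a minimal sketch of the kind of path change described above, assuming the dataset class builds its label paths from a directory name (the actual attribute names in lib/datasets/VOCDataset.py may differ):

```python
import os

# Hypothetical illustration of pointing a VOC-style dataset at the augmented
# annotations instead of the original SegmentationClass labels.
VOC_ROOT = '/path/to/VOCdevkit/VOC2012'   # assumed dataset root
seg_dir = 'SegmentationClassAug'          # augmented labels placed alongside SegmentationClass

def label_path(image_id):
    """Build the path to the segmentation label for one image id."""
    return os.path.join(VOC_ROOT, seg_dir, image_id + '.png')

print(label_path('2007_000032'))
```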
The DeepLabv3+ version does not train after loading the model. It is stuck at 0% forever, as shown below: (deeplab-yude) u6617221@anaconda:~/Models/semantic-segmentation-codebase/experiment/deeplabv3+voc$ python train.py /students/u6617221/Models/semantic-segmentation-codebase/model/resnet101s-03a0f310.pth loaded. Use 6 GPU 0%| |...
Dear YudeWang, I have a question while running your code. I'm running the `experiment/seamv1_pseudovoc/` code. Even though I changed the 'ROOT_DIR' directory, an error about LocalFileSystem appears. ...
This is great base work. I was learning how to use CAM for segmentation training and then found it, but I didn't find the function to generate pseudo...
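For readers in the same situation, a minimal sketch of the common way CAMs are turned into pseudo segmentation labels (per-pixel argmax over the class maps with a constant background threshold); this is a generic illustration, not the pseudo-label generation code of this repository:

```python
import numpy as np

def cam_to_pseudo_label(cams, present_classes, bg_threshold=0.2):
    """Convert class activation maps into a pseudo segmentation label.

    cams: float array of shape (num_classes, H, W), normalized to [0, 1].
    present_classes: class indices known to appear in the image (image-level labels).
    Returns an (H, W) int array where 0 is background and c+1 is foreground class c.
    """
    num_classes, h, w = cams.shape
    # Suppress classes that are not present according to the image-level label.
    masked = np.zeros_like(cams)
    masked[present_classes] = cams[present_classes]
    # Prepend a constant background score and take the per-pixel argmax.
    scores = np.concatenate(
        [np.full((1, h, w), bg_threshold, dtype=cams.dtype), masked], axis=0)
    return scores.argmax(axis=0)

# Example: two classes on a 4x4 image, only class 1 present in the image label.
cams = np.random.rand(2, 4, 4).astype(np.float32)
print(cam_to_pseudo_label(cams, present_classes=[1]))
```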
Thanks for releasing your code, which has been a great help to me. However, I run into some errors when training the model with Xception as its backbone. I replace...
Hi, thanks for sharing this nice repository. However, I found that you commented out the initialization of the convolutional weights in deeplabv3plus.py. Did you deliberately comment out this part because this setting can...
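For context, the block in question is usually the standard Kaiming initialization of convolution weights; a minimal, generic PyTorch sketch of what such a (possibly commented-out) routine looks like, not necessarily the exact code in deeplabv3plus.py:

```python
import torch.nn as nn

def init_conv_weights(model):
    """Kaiming-initialize conv layers and reset BatchNorm affine parameters."""
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            if m.bias is not None:
                nn.init.zeros_(m.bias)
        elif isinstance(m, nn.BatchNorm2d):
            nn.init.ones_(m.weight)
            nn.init.zeros_(m.bias)

# Note: if a pretrained backbone is loaded, re-initializing it afterwards would
# wipe the pretrained weights, which is one common reason such a block is
# disabled or only applied to the newly added head layers.
```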
Hi, I cannot build the model for either the deeplabv3+ or the SEAM experiments. For SEAM's deeplabv1-ResNet38d, it always shows the following error message: mxnet keeps raising the "get_last_ffi_error()"...
In the README, the paper performance of the model with the ResNet-101 backbone is 80.22%, and your implementation reaches 79.916%. Was this result obtained with both the 'freeze' and 'unfreeze' training stages, or was it trained only in the freeze stage without fine-tuning?
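For clarity, a minimal sketch of what such a two-stage 'freeze'/'unfreeze' schedule typically looks like in PyTorch (toy model and hyperparameters are illustrative, not the repository's actual config):

```python
import torch
import torch.nn as nn

class ToySegNet(nn.Module):
    """Stand-in for a segmentation network with a backbone and a head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(8, 21, 1)
    def forward(self, x):
        return self.head(self.backbone(x))

def set_backbone_trainable(net, trainable):
    for p in net.backbone.parameters():
        p.requires_grad = trainable

model = ToySegNet()

# Stage 1 ('freeze'): train only the head on top of the frozen backbone.
set_backbone_trainable(model, False)
opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad],
                      lr=0.007, momentum=0.9, weight_decay=1e-4)

# Stage 2 ('unfreeze'): fine-tune the whole network with a smaller learning rate.
set_backbone_trainable(model, True)
opt = torch.optim.SGD(model.parameters(), lr=0.0007, momentum=0.9, weight_decay=1e-4)
```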
Hello, I changed 'MODEL_NAME': 'deeplabv1' in config.py to 'deeplabv2' for training and got this error: torch.nn.modules.module.ModuleAttributeError: 'deeplabv2' object has no attribute 'cfg'. I guess the deeplabv2 config.py file is different from deeplabv1's, right? Can...
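For what it's worth, this kind of error usually means the model class never stored the config object it later tries to read; a generic sketch of the pattern (hypothetical class and config keys, not the repository's actual deeplabv2 implementation):

```python
import torch.nn as nn

class deeplabv2(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        # Without this assignment, any later access to self.cfg raises
        # "'deeplabv2' object has no attribute 'cfg'".
        self.cfg = cfg
        self.classifier = nn.Conv2d(2048, cfg['MODEL_NUM_CLASSES'], 1)

cfg = {'MODEL_NUM_CLASSES': 21}
net = deeplabv2(cfg)
print(net.cfg['MODEL_NUM_CLASSES'])
```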