ShapeConv
ShapeConv: Shape-aware Convolutional Layer for Indoor RGB-D Semantic Segmentation (ICCV 2021)
The official implementation of Shape-aware Convolutional Layer.
Jinming Cao, Hanchao Leng, Dani Lischinski, Danny Cohen-Or, Changhe Tu, Yangyan Li, ICCV 2021.
Introduction
We design a Shape-aware Convolutional (ShapeConv) layer to explicitly model shape information and thereby improve RGB-D semantic segmentation accuracy. Specifically, we decompose the depth feature into a shape-component and a base-component, and then introduce two learnable weights to process the two components separately.
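To make the decomposition concrete, below is a minimal PyTorch sketch of the idea, not the layer implemented in this repository: each k x k patch is split into its mean (base-component) and the residual around the mean (shape-component), and two learnable weights, here named `w_base` and `w_shape`, re-weight the components before the convolution kernel is applied. The names, the scalar form of the weights, and the stride-1 layout are simplifying assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ShapeConvSketch(nn.Module):
    """Illustrative sketch of the ShapeConv decomposition (not the repo's exact layer).

    Each k x k patch is split into a base-component (its mean over the window)
    and a shape-component (the residual around that mean); two learnable
    scalars re-weight the components before the convolution kernel is applied.
    """

    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        self.k = kernel_size
        self.padding = padding
        # Flattened convolution kernel: (out_channels, in_channels * k * k).
        self.weight = nn.Parameter(
            0.01 * torch.randn(out_channels, in_channels * kernel_size * kernel_size))
        self.bias = nn.Parameter(torch.zeros(out_channels))
        self.w_base = nn.Parameter(torch.ones(1))   # learnable weight for the base-component
        self.w_shape = nn.Parameter(torch.ones(1))  # learnable weight for the shape-component

    def forward(self, x):
        b, c, h, w = x.shape
        h_out = h + 2 * self.padding - self.k + 1
        w_out = w + 2 * self.padding - self.k + 1
        # Unfold into sliding patches: (B, C * k * k, L) with L = h_out * w_out.
        patches = F.unfold(x, self.k, padding=self.padding)
        patches = patches.view(b, c, self.k * self.k, -1)
        base = patches.mean(dim=2, keepdim=True)    # base-component: patch mean
        shape = patches - base                      # shape-component: residual
        mixed = self.w_base * base + self.w_shape * shape
        mixed = mixed.view(b, c * self.k * self.k, -1)
        # The standard convolution expressed as a matrix product over patches.
        out = torch.einsum('oi,bil->bol', self.weight, mixed)
        out = out + self.bias.view(1, -1, 1)
        return out.view(b, -1, h_out, w_out)
```

For a 4-channel RGB-D-style input, `ShapeConvSketch(4, 64)(torch.randn(2, 4, 32, 32))` returns a `(2, 64, 32, 32)` tensor. Note that in this sketch, when both weights equal 1 the recombination is the identity, so the layer behaves exactly like a standard convolution.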

Usage
Installation
- Requirements
- Linux
- Python 3.6+
- PyTorch 1.7.0 or higher
- CUDA 10.0 or higher
We have tested the following versions of OS and software:
- OS: Ubuntu 16.04.6 LTS
- CUDA: 10.0
- PyTorch 1.7.0
- Python 3.6.9
- Install dependencies.
`pip install -r requirements.txt`
Dataset
Download the official dataset and convert it to a format appropriate for this project. See here.
Or download the converted dataset:
Evaluation
- Model
  Download a trained model and put it in the folder `./model_zoo`. See all trained models here.
- Config
  Edit the config file in `./configs`. The config files in `./configs` correspond to the model files in `./model_zoo`.
  - Set `inference.gpu_id = CUDA_VISIBLE_DEVICES`. `CUDA_VISIBLE_DEVICES` is used to specify which GPUs should be visible to a CUDA application, e.g., `inference.gpu_id = "0,1,2,3"`.
  - Set `dataset_root = path_to_dataset`. `path_to_dataset` is the path of the dataset, e.g., `dataset_root = "/home/shape_conv/nyu_v2"`. (A minimal illustrative config fragment is sketched after this section.)
- Run
  - Distributed evaluation, please run:
    `./tools/dist_test.sh config_path checkpoint_path gpu_num`
    `config_path` is the path of the config file; `checkpoint_path` is the path of the model file; `gpu_num` is the number of GPUs used; note that `gpu_num <= len(inference.gpu_id)`.
    E.g., to evaluate the ShapeConv model on NYU-V2 (40 categories), please run:
    `./tools/dist_test.sh configs/nyu/nyu40_deeplabv3plus_resnext101_shape.py model_zoo/nyu40_deeplabv3plus_resnext101_shape.pth 4`
  - Non-distributed evaluation:
    `python tools/test.py config_path checkpoint_path`
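For reference, a minimal config fragment covering the two settings above might look as follows. The exact layout of the files in `./configs` (including whether `inference` is a plain dict) is an assumption here, so consult the real config files:

```python
# Illustrative fragment only; see the actual files in ./configs for the real structure.
dataset_root = "/home/shape_conv/nyu_v2"  # path of the converted dataset

inference = dict(
    gpu_id="0,1,2,3",  # GPUs visible to the CUDA application (CUDA_VISIBLE_DEVICES)
)
```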
Train
- Config
  Edit the config file in `./configs`.
  - Set `inference.gpu_id = CUDA_VISIBLE_DEVICES`, e.g., `inference.gpu_id = "0,1,2,3"`.
  - Set `dataset_root = path_to_dataset`, e.g., `dataset_root = "/home/shape_conv/nyu_v2"`.
- Run
  - Distributed training, please run:
    `./tools/dist_train.sh config_path gpu_num`
    E.g., to train the ShapeConv model on NYU-V2 (40 categories) with 4 GPUs, please run:
    `./tools/dist_train.sh configs/nyu/nyu40_deeplabv3plus_resnext101_shape.py 4`
  - Non-distributed training:
    `python tools/train.py config_path`
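For example, non-distributed training with the same NYU-V2 (40 categories) config used above:
`python tools/train.py configs/nyu/nyu40_deeplabv3plus_resnext101_shape.py`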
Result
For more results and pre-trained models, please see the model zoo.
NYU-V2 (40 categories)
| Architecture | Backbone | MS & Flip | Shape Conv | mIOU |
|---|---|---|---|---|
| DeepLabv3plus | ResNeXt-101 | False | False | 48.9% |
| DeepLabv3plus | ResNeXt-101 | False | True | 50.2% |
| DeepLabv3plus | ResNeXt-101 | True | False | 50.3% |
| DeepLabv3plus | ResNeXt-101 | True | True | 51.3% |
SUN-RGBD
| Architecture | Backbone | MS & Flip | Shape Conv | mIOU |
|---|---|---|---|---|
| DeepLabv3plus | ResNet-101 | False | False | 46.9% |
| DeepLabv3plus | ResNet-101 | False | True | 47.6% |
| DeepLabv3plus | ResNet-101 | True | False | 47.6% |
| DeepLabv3plus | ResNet-101 | True | True | 48.6% |
SID (Stanford Indoor Dataset)
| Architecture | Backbone | MS & Flip | Shape Conv | mIOU |
|---|---|---|---|---|
| DeepLabv3plus | ResNet-101 | False | False | 54.55% |
| DeepLabv3plus | ResNet-101 | False | True | 60.6% |
Citation
If you find this repo useful, please consider citing:
@article{cao2021shapeconv,
  title={ShapeConv: Shape-aware Convolutional Layer for Indoor RGB-D Semantic Segmentation},
  author={Cao, Jinming and Leng, Hanchao and Lischinski, Dani and Cohen-Or, Danny and Tu, Changhe and Li, Yangyan},
  journal={arXiv preprint arXiv:2108.10528},
  year={2021}
}
Acknowledgments
This repository is heavily based on vedaseg.