CenterPillarNet
An anchor-free method for point cloud object detection.
Result

Introduction
This is an anchor-free method for point cloud object detection.
This project is not finished yet; many parts can still be improved.
If you are interested in this project, feel free to change the code and make it work better.
If you have any ideas about this work, please contact me.
More details will be posted on the wiki.
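To give a feel for what "anchor-free" means here: CenterNet-style detectors [2] represent each object as a Gaussian peak on a bird's-eye-view heatmap instead of matching anchor boxes. The sketch below shows the standard target-drawing idea from the CenterNet codebase; it illustrates the concept and is not necessarily this repository's exact code.

```python
import numpy as np

def gaussian_2d(shape, sigma=1.0):
    # Isotropic 2D Gaussian kernel of the given (odd) shape.
    m, n = [(s - 1.0) / 2.0 for s in shape]
    y, x = np.ogrid[-m:m + 1, -n:n + 1]
    return np.exp(-(x * x + y * y) / (2.0 * sigma * sigma))

def draw_gaussian(heatmap, center, radius):
    # Splat a Gaussian peak for one object center onto the heatmap,
    # keeping the element-wise max so nearby objects do not erase each other.
    diameter = 2 * radius + 1
    gaussian = gaussian_2d((diameter, diameter), sigma=diameter / 6.0)
    x, y = int(center[0]), int(center[1])
    h, w = heatmap.shape
    left, right = min(x, radius), min(w - x, radius + 1)
    top, bottom = min(y, radius), min(h - y, radius + 1)
    roi = heatmap[y - top:y + bottom, x - left:x + right]
    g = gaussian[radius - top:radius + bottom, radius - left:radius + right]
    np.maximum(roi, g, out=roi)
    return heatmap

# Example: one car center at BEV grid cell (50, 30) on a 100x100 heatmap.
hm = np.zeros((100, 100), dtype=np.float32)
draw_gaussian(hm, center=(50, 30), radius=4)
print(hm.max())  # 1.0 at the object center
```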
1. Clone Code
git clone https://github.com/wangx1996/CenterPillarNet.git CenterPillarNet
cd CenterPillarNet/
2. Install Dependencies
2.1 Base packages
pip install -r requirements.txt
For Anaconda:
conda install scikit-image scipy numba pillow matplotlib
pip install fire tensorboardX protobuf opencv-python
2.2 spconv
First, download the code:
git clone https://github.com/traveller59/spconv.git --recursive spconv
cd spconv
Build the code
python setup.py bdist_wheel
cd ./dist
pip install ***.whl
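After installation, you can quickly check that spconv works. This sketch assumes the spconv 1.x API (spconv.SparseConvTensor), which projects of this era typically use:

```python
# Sanity check for the spconv install (assumes spconv 1.x API).
import torch
import spconv

features = torch.randn(3, 16)  # 3 active voxels, 16 features each
indices = torch.tensor([[0, 0, 0, 0],
                        [0, 1, 2, 3],
                        [0, 4, 5, 6]], dtype=torch.int32)  # (batch_idx, z, y, x)
x = spconv.SparseConvTensor(features, indices,
                            spatial_shape=[8, 8, 8], batch_size=1)
print(x.dense().shape)  # expected: torch.Size([1, 16, 8, 8, 8])
```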
2.3 DCN
Please download DCNv2 from https://github.com/jinfagang/DCNv2_latest, which supports PyTorch 1.x.
Put the files into
./src/model/
then run
./make.sh
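Once built, the deformable convolution layer can be used like a normal conv layer. A minimal usage sketch, assuming the DCN module exported by DCNv2_latest (the channel sizes here are arbitrary):

```python
# Minimal usage sketch for DCNv2 (assumes the dcn_v2 module from DCNv2_latest).
import torch
from dcn_v2 import DCN

# Drop-in replacement for nn.Conv2d: sampling offsets are predicted internally.
dcn = DCN(64, 64, kernel_size=(3, 3), stride=1,
          padding=1, deformable_groups=1).cuda()  # DCNv2 requires a GPU
x = torch.randn(2, 64, 128, 128).cuda()
print(dcn(x).shape)  # expected: torch.Size([2, 64, 128, 128])
```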
2.4 Set up CUDA for numba
export NUMBAPRO_CUDA_DRIVER=/usr/lib/x86_64-linux-gnu/libcuda.so
export NUMBAPRO_NVVM=/usr/local/cuda/nvvm/lib64/libnvvm.so
export NUMBAPRO_LIBDEVICE=/usr/local/cuda/nvvm/libdevice
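After exporting these variables, you can confirm that numba sees your GPU:

```python
# Check that numba can find the CUDA driver and libraries set above.
from numba import cuda

print(cuda.is_available())  # should print True
cuda.detect()               # lists the detected CUDA devices
```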
3. Prepare Data
KITTI dataset
You can download the KITTI 3D object detection dataset from here. It includes:
- Velodyne point clouds (29 GB)
- Training labels of object data set (5 MB)
- Camera calibration matrices of object data set (16 MB)
- Left color images of object data set (12 GB)
The data structure should look like:
└── KITTI_DATASET_ROOT
    ├── training    <-- 7481 train data
    |   ├── image_2 <-- for visualization
    |   ├── calib
    |   ├── label_2
    |   └── velodyne
    ├── testing     <-- 7518 test data
    |   ├── image_2 <-- for visualization
    |   ├── calib
    |   └── velodyne
    └── ImageSets
        ├── train.txt
        ├── val.txt
        └── test.txt
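A quick way to verify the layout before training is to check that every frame listed in ImageSets has a matching point cloud. This is a hypothetical helper, not part of the repository; adjust the root path to your dataset:

```python
# Hypothetical sanity check for the KITTI layout sketched above.
import os

root = "KITTI_DATASET_ROOT"  # change to your dataset root
for split, sub in [("train.txt", "training"),
                   ("val.txt", "training"),
                   ("test.txt", "testing")]:
    with open(os.path.join(root, "ImageSets", split)) as f:
        ids = f.read().split()
    missing = [i for i in ids
               if not os.path.isfile(os.path.join(root, sub, "velodyne", i + ".bin"))]
    print(f"{split}: {len(ids)} frames, {len(missing)} missing velodyne files")
```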
4. How to Use
First, make sure the dataset directory is set correctly in your train.py file.
Then run
python train.py --gpu_idx 0 --arch dla_34 --saved_fn cpdla --batch_size 1
Tensorboard
cd logs/<saved_fn>/tensorboard/
tensorboard --logdir=./
I only have one RTX 2070, so the batch size must be 1, but if you have more GPUs (or more memory), you can try a larger batch size.
If you want to test the model:
python test.py --gpu_idx 0 --arch dla_34 --pretrained_path ../checkpoints/**/**
If you want to evaluate the model:
python evaluate.py --gpu_idx 0 --arch dla_34 --pretrained_path ../checkpoints/**/**
You can also evaluate the model with another method:
first run
python evaluatefiles.py --gpu_idx 0 --arch dla_34 --pretrained_path ../checkpoints/**/**
then use this project to evaluate.
Reference
Thanks to all of these great works.
[1] SFA3D
[2] CenterNet: Objects as Points, [PyTorch Implementation]
[3] PointPillars: Fast Encoders for Object Detection from Point Clouds, [PyTorch Implementation]
[4] Deformable Convolutional Networks [final version code]
Inspired by
[1] AFDet: Anchor Free One Stage 3D Object Detection
CheckPoint
GoogleDrive: https://drive.google.com/drive/folders/1Iobh8OiWvytPvK_u2TOtEtgUTIn3r6Hz?usp=sharing
More
Evaluation results with peak_thresh=0.5:
Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:78.04, 73.71, 66.88
bev AP:79.25, 73.67, 66.84
3d AP:60.75, 55.75, 51.03
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:78.04, 73.71, 66.88
bev AP:82.64, 77.12, 69.38
3d AP:82.31, 76.68, 69.07
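For context, peak_thresh is the score cutoff applied when reading detections off the center heatmap. CenterNet-style decoding keeps only local maxima above this threshold; a minimal sketch (assuming PyTorch, not necessarily this repository's exact decode):

```python
import torch
import torch.nn.functional as F

def decode_peaks(heatmap, peak_thresh=0.5):
    # heatmap: (B, C, H, W) of sigmoid scores.
    # 3x3 max-pool NMS: a cell survives only if it is a local maximum.
    pooled = F.max_pool2d(heatmap, kernel_size=3, stride=1, padding=1)
    keep = (pooled == heatmap).float()
    scores = heatmap * keep
    return scores > peak_thresh  # boolean mask of detection centers

hm = torch.rand(1, 3, 96, 320)  # e.g. 3 classes on a BEV grid
print(decode_peaks(hm).sum().item(), "peaks above threshold")
```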
You can see that the 3D size estimation does not perform very well.
You can also visualize the 3D point cloud with the test code.

More results
