3D Object Recognition with Deep Networks
This is the 3D Object Recognition with Deep Networks Project for the 3D Vision course at ETHZ
What is needed:
##### Input Data / Voxel / Occupancy Grid
- read 3D CAD voxel data and transform it to an occupancy grid
  - ModelNet10 - zip file
    - an adapted version of it is used in the VoxNet paper
  - ModelNet40 - zip file
  - .OFF files can be viewed in MeshLab
  - MATLAB function to read .OFF files (MATLAB file)
  - the function in the 3D ShapeNets source code that transforms to an occupancy grid is called ...
- optional: 2.5D reconstruction (combine multiple 2.5D representations into a new 3D representation)
- optional: 2.5D & 3D point cloud data to voxel data (Project Tango & extra training data sets)
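The mesh-to-occupancy-grid step can be sketched in Python. This is a minimal surface voxelizer using random point sampling on triangles; the function names and the sampling approach are our own assumptions, not the ShapeNets conversion function:

```python
import numpy as np

def voxelize(verts, faces, grid=30, samples_per_face=100, seed=0):
    """Turn a triangle mesh (as read from an .OFF file) into a binary
    occupancy grid by sampling random points on each triangle surface."""
    rng = np.random.default_rng(seed)
    tris = verts[faces]                       # (F, 3, 3) triangle corners
    # uniform barycentric sampling of points on every triangle
    u = rng.random((len(tris), samples_per_face, 2))
    flip = u.sum(-1) > 1                      # fold samples into the triangle
    u[flip] = 1 - u[flip]
    a, b = u[..., :1], u[..., 1:]
    pts = (1 - a - b) * tris[:, None, 0] + a * tris[:, None, 1] + b * tris[:, None, 2]
    pts = pts.reshape(-1, 3)
    # normalize points into the grid and mark the voxels they fall into
    lo, hi = pts.min(0), pts.max(0)
    idx = ((pts - lo) / (hi - lo + 1e-9) * (grid - 1)).astype(int)
    occ = np.zeros((grid, grid, grid), dtype=np.uint8)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return occ
```

Surface sampling only marks shell voxels; the ShapeNets pipeline may fill interiors differently, so treat this as a placeholder until the real conversion function is identified.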
##### VoxNet
- VoxNet Source Code Python - Github
- Convolutional Neural Network
- Input Data
- Rotation Augmentation & Voting - increases performance when applied
- Multiresolution Input (not used on ModelNet)
- 3 Grids - Best Results for Density Grid
- Density Grid
- Binary Grid
- Hit Grid
- Training: Stochastic Gradient Descent with Momentum
- learning rate = 0.0001
- momentum parameter = 0.9
- batch size = 32
- learning rate decrease: factor of 10 every 40,000 batches
- Dropout Regularization after output of each layer
- Initialization:
- Convolutional Layers:
- forward propagation: zero-mean Gaussian with std. dev. = sqrt(2/n_l), where n_l = dimension of the input array (30x30x30) of layer l * input channels of layer l
- backward propagation: zero-mean Gaussian with std. dev. = sqrt(2/n*_l), where n*_l = dimension of the input array (30x30x30) of layer l * input channels of layer l-1
- Dense Layers: zero-mean Gaussian with std. dev. = 0.01
- Data Augmentation:
- augment training data through randomly perturbed (mirrored/shifted) copies
- mirror along the x and y axes
- shift by -2 to 2 voxels along the x and y axes
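The Gaussian initialization above can be sketched with numpy. A sketch assuming n_l is computed as in the note (input-array volume * input channels); the (32, 1, 5, 5, 5) filter-bank shape is illustrative, not VoxNet's actual layer size:

```python
import numpy as np

def he_gaussian(shape, n_l, seed=0):
    """Zero-mean Gaussian weights with std = sqrt(2 / n_l) (He et al. style)."""
    return np.random.default_rng(seed).normal(0.0, np.sqrt(2.0 / n_l), size=shape)

# conv layer l: n_l = (volume of the layer's input array) * (input channels)
w_conv = he_gaussian((32, 1, 5, 5, 5), n_l=30 * 30 * 30 * 1)

# dense layers: zero-mean Gaussian with std = 0.01
w_dense = np.random.default_rng(0).normal(0.0, 0.01, size=(128, 10))
```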
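The mirror/shift perturbation can be sketched in numpy (a sketch; the real implementation is in the VoxNet repo linked above):

```python
import numpy as np

def augment(occ, rng):
    """Randomly perturb one occupancy grid: mirror along the x and/or y
    axis, then shift by -2..2 voxels along x and y (empty voxels pad in)."""
    out = occ.copy()
    if rng.random() < 0.5:
        out = out[::-1, :, :]              # mirror along x
    if rng.random() < 0.5:
        out = out[:, ::-1, :]              # mirror along y
    dx, dy = rng.integers(-2, 3, size=2)   # shifts in {-2, ..., 2}
    out = np.roll(out, (dx, dy), axis=(0, 1))
    # zero the wrapped-around slices so the shift pads with empty space
    if dx > 0:
        out[:dx] = 0
    elif dx < 0:
        out[dx:] = 0
    if dy > 0:
        out[:, :dy] = 0
    elif dy < 0:
        out[:, dy:] = 0
    return out
```

At test time the same idea runs in reverse: score several rotated/mirrored copies of the input and vote over their predictions.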
##### 3D ShapeNets
- 3D ShapeNets - source code (MATLAB) - zip
- special learning algorithm
- Convolutional Deep Belief Network
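A Convolutional Deep Belief Network stacks convolutional RBM layers; a minimal numpy sketch of one Gibbs sampling step for a single-channel convolutional RBM (shapes, names, and the plain-loop convolutions are our own simplifications, not the ShapeNets MATLAB code):

```python
import numpy as np

def corr3d_valid(v, w):
    """'Valid' 3D cross-correlation of grid v with one filter w."""
    f, d = w.shape[0], v.shape[0] - w.shape[0] + 1
    out = np.zeros((d, d, d))
    for i in range(f):
        for j in range(f):
            for k in range(f):
                out += w[i, j, k] * v[i:i + d, j:j + d, k:k + d]
    return out

def conv3d_full(h, w):
    """'Full' 3D convolution of hidden map h with filter w."""
    f, d = w.shape[0], h.shape[0]
    out = np.zeros((d + f - 1,) * 3)
    for i in range(f):
        for j in range(f):
            for k in range(f):
                out[i:i + d, j:j + d, k:k + d] += w[i, j, k] * h
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b_h, b_v, rng):
    """One Gibbs step: sample hidden feature maps given the binary voxel
    grid v, then resample the grid from the hidden maps."""
    h_prob = np.stack([sigmoid(corr3d_valid(v, W[k]) + b_h[k])
                       for k in range(len(W))])
    h = (rng.random(h_prob.shape) < h_prob).astype(float)
    v_prob = sigmoid(sum(conv3d_full(h[k], W[k]) for k in range(len(W))) + b_v)
    return h, (rng.random(v_prob.shape) < v_prob).astype(float)
```

Contrastive-divergence training would compare (v, h) statistics before and after such a step to update W; the actual ShapeNets learning procedure has additional layer-wise details.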
Steps:
- Get the ModelNet 3D CAD data, read it, and transform it if necessary
- Build VoxNet
- Build ShapeNet
- Train, Validate & Tune
- not sure whether cross-validation works with the mix of 3D training data and 2.5D validation data, but there is probably some way
- depending on how many parameters are left to tune and how much GPU power we have, this might take a while (it took them 2 days for 40 objects on a cluster)
- Test on held-out data and try to recreate the experiments from Wu et al. & Maturana et al.
- optional: maybe try to get some point cloud data from a Tango tablet and test the models against it