Wenbin Lin
Thanks for your interest. We first project the position of the 3D node into the 2D image space and read the depth value of the projected pixel. If the depth...
We take a 7x7 window centered on the projected pixel to calculate the average motion.
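A minimal sketch of this sampling step, assuming a standard pinhole camera model; the names (`fx`, `fy`, `cx`, `cy`, `depth_map`, `flow_2d`) are illustrative and not the actual variables in our code:

```python
import numpy as np

def sample_node_motion(node_xyz, depth_map, flow_2d, fx, fy, cx, cy, win=7):
    """Project a 3D node into the image, read the depth at the projected pixel,
    and average the 2D motion over a win x win window centered on that pixel."""
    x, y, z = node_xyz
    # Pinhole projection of the node position into pixel coordinates.
    u = int(round(fx * x / z + cx))
    v = int(round(fy * y / z + cy))

    # Depth value at the projected pixel (used, e.g., for the visibility check).
    d = depth_map[v, u]

    # Average the 2D motion over a 7x7 window centered at (v, u).
    r = win // 2
    h, w = depth_map.shape
    v0, v1 = max(v - r, 0), min(v + r + 1, h)
    u0, u1 = max(u - r, 0), min(u + r + 1, w)
    avg_motion = flow_2d[v0:v1, u0:u1].reshape(-1, flow_2d.shape[-1]).mean(axis=0)
    return d, avg_motion
```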
For finding the vertex pair set P, we do not use the above averaging method.
OcclusionFusion is able to reconstruct multiple people. We show a sequence of two pillows colliding in our video, and we use only one model. I think it is similar for...
Sorry, we have been busy with other projects lately. We will clean up and release the code for data preprocessing, network training, and evaluation as soon as we have time,...
The graph-based ARAP deformation is implemented as `E_reg` in our paper. The energy term for post-processing is a combination of `E_motion` and `E_reg`, and we optimize the rigid rotation and...
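A rough sketch of a graph-based ARAP regularization term in the spirit of `E_reg`; the exact weighting and parameterization used in the paper may differ:

```python
import numpy as np

def arap_reg_energy(nodes, edges, rotations, translations, weights=None):
    """ARAP-style regularization over a deformation graph.

    nodes:        (N, 3) rest positions g_i of the graph nodes
    edges:        list of (i, j) neighbor pairs in the graph
    rotations:    (N, 3, 3) per-node rotations R_i
    translations: (N, 3) per-node translations t_i
    """
    energy = 0.0
    for k, (i, j) in enumerate(edges):
        w = 1.0 if weights is None else weights[k]
        # Position of node j as predicted by the deformation of node i ...
        pred_j = rotations[i] @ (nodes[j] - nodes[i]) + nodes[i] + translations[i]
        # ... compared against node j's own deformed position.
        actual_j = nodes[j] + translations[j]
        energy += w * np.sum((pred_j - actual_j) ** 2)
    return energy
```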
Hi, thanks for your interest! We only made a few changes to the open-source RAFT implementation to adapt it to RGB-D input, and no additional modifications were made for the...
We do not have plans to release the code for RGB-D based RAFT training for now; it's actually quite simple to implement. As you mentioned, we just add the inverse...
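A minimal, hypothetical sketch of what such an RGB-D input could look like, assuming the inverse depth is concatenated as an extra input channel; the actual modification in our code may differ:

```python
import torch
import torch.nn as nn

def make_rgbd_input(rgb, depth, eps=1e-6):
    """rgb: (B, 3, H, W); depth: (B, 1, H, W) in meters.
    Returns a 4-channel tensor with inverse depth as the extra channel
    (illustrative only, not the exact preprocessing used in our code)."""
    inv_depth = 1.0 / depth.clamp(min=eps)
    return torch.cat([rgb, inv_depth], dim=1)

# The architectural change would then be limited to the first convolution of
# the feature/context encoders, e.g. 4 input channels instead of 3:
first_conv = nn.Conv2d(in_channels=4, out_channels=64,
                       kernel_size=7, stride=2, padding=3)
```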
We train the model successively in the order of FlyingThings3D -> Monkaa -> Sintel for 100k iterations each. If there is any confusion about it, please feel free to let...
We follow the RAFT implementation and simply freeze the batch normalization layers after training on FlyingThings3D. We also use the full FlyingThings3D dataset instead of the smaller subset.
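A rough sketch of this staged schedule; the helpers `make_loader` and `train_steps` are hypothetical placeholders, not functions from the RAFT code:

```python
import torch.nn as nn

def freeze_batchnorm(model):
    """Put all BatchNorm layers in eval mode so their running statistics stay
    fixed (needs re-applying after any later call to model.train())."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.eval()

def staged_training(model, make_loader, train_steps):
    # Stage 1: the full FlyingThings3D dataset (not the smaller subset).
    train_steps(model, make_loader('FlyingThings3D'), num_steps=100_000)

    # BatchNorm is frozen after FlyingThings3D, as in the RAFT training recipe.
    freeze_batchnorm(model)

    # Stages 2 and 3: Monkaa, then Sintel, 100k iterations each.
    train_steps(model, make_loader('Monkaa'), num_steps=100_000)
    train_steps(model, make_loader('Sintel'), num_steps=100_000)
```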