Bo Yang
hi @Hans1984, there's no need to modify the code for rendering images and voxelization, but you do need to change a few lines to load the ModelNet objects which...
@bsaund Many thanks for reporting the mismatch. The learning rate in the released code is correct. We will update it in the next paper version.
@ming-dream see #5. The processing steps and code are there.
@ming-dream One depth image is processed into a single "npz" file. The other "npz" file is the corresponding 3D ground truth, which is generated through a voxelization algorithm or the "binvox"...
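A minimal sketch of loading one such training pair, assuming each sample is stored as two .npz files: a partial 2.5D voxel grid and its full 3D ground truth. The file names, array key, and grid resolutions below are hypothetical examples for illustration, not the repo's exact layout.

```python
import numpy as np

def load_pair(partial_path, gt_path):
    """Load one (partial 2.5D, full 3D ground truth) voxel-grid pair.

    Assumes each .npz holds a single unnamed array, which np.savez
    stores under the default key "arr_0".
    """
    partial = np.load(partial_path)["arr_0"]  # e.g. a 64^3 occupancy grid
    gt = np.load(gt_path)["arr_0"]            # e.g. a 256^3 occupancy grid
    return partial.astype(np.float32), gt.astype(np.float32)
```

The float32 cast just matches the dtype typically fed to a TensorFlow/PyTorch training loop; adjust the key if your files were saved with named arrays.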
@kunalkhadilkar Please save the results as npz files (replace scipy.io.savemat() with np.savez()), then use the function "visualize()" from here: https://github.com/Yang7879/3D-RecGAN-extended/blob/master/demo_3D-RecGAN%2B%2B.py
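A small sketch of converting results already saved with scipy.io.savemat() into the .npz format that the demo's visualize() expects. The variable name "Y_test_pred" inside the .mat file is a hypothetical example; use whatever key your results were saved under.

```python
import numpy as np
import scipy.io

def mat_to_npz(mat_path, npz_path, key="Y_test_pred"):
    """Re-save one array from a .mat file as an .npz file.

    np.savez stores the positional array under the default key "arr_0",
    matching what np.load(...)["arr_0"] retrieves.
    """
    data = scipy.io.loadmat(mat_path)
    np.savez(npz_path, data[key])
```

After the conversion, the resulting .npz can be passed to the visualize() function from demo_3D-RecGAN++.py linked above.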
@guanrenxue Thanks for your interest. All data and the trained model are also available at Baidu Pan: https://pan.baidu.com/s/1FQXo_XQX4flDrE_jwElCCw (extraction code: cam7)
@yzp12 The 3D shape is generated by fusing a long sequence (thousands) of depth images captured with a Kinect V2. A number of depth images are then randomly selected from the long sequence...
@yzp12 A pair of the partial 2.5D view and the ground truth 3D can be voxelized and aligned as you described, but in all of our experiments, both the partial...
@yzp12 Yes, we segmented the object manually, but there are existing algorithms for the task.
@PranjaLBiswas27 We use MeshLab. It's pretty easy to remove floors or backgrounds with it.