Ziwei Liu
The C-Faces dataset has been added to the OCDA benchmark suite: https://drive.google.com/drive/folders/1_uNTF8RdvhS_sqVTnYx17hEOQpefmE2r?usp=sharing . Thanks!
Thanks for the feedback. We will update the attribute prediction model soon.
The input to the landmark detection model should be the cropped clothing bounding-box region instead of the full image. Hope this helps.
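As a minimal sketch of that pre-processing step, the clothing region can be sliced out of the full image before it is fed to the landmark model. The `(x1, y1, x2, y2)` pixel-coordinate box format here is an assumption for illustration, not a spec from the codebase.

```python
# Hypothetical pre-processing sketch: crop the clothing bounding box
# before landmark detection. The (x1, y1, x2, y2) bbox format is an
# assumption, not the repository's actual convention.
import numpy as np

def crop_bbox(image, bbox):
    """Return the bounding-box region of an HxWxC image array."""
    x1, y1, x2, y2 = bbox
    return image[y1:y2, x1:x2]

image = np.zeros((600, 400, 3), dtype=np.uint8)  # dummy full image
patch = crop_bbox(image, (50, 100, 250, 500))    # clothing region only
print(patch.shape)  # (400, 200, 3)
```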
Yes, the aforementioned procedure is correct. Since voxel flow is invariant to scaling, it can be readily applied to high-res videos.
This repository only contains the basic network setup for deep voxel flow. Skip connections and multi-scale architecture are not included here. For more updates, please refer to our newly released...
The pre-trained model of "Deep Voxel Flow" is available at: https://drive.google.com/file/d/1FB-mpS4UokiLriDBNJSBmozMQRH0Qez1/view .
Actually, regardless of the conditioning inputs (e.g., two frames are used in the paper), the learned voxel flow can be applied to any single frame to synthesize a warped frame,...
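To make that concrete, here is a rough sketch of warping a single frame with a learned flow field via backward sampling. The flow layout (`HxWx2`, per-pixel offsets in pixels) and the nearest-neighbor sampling are simplifying assumptions for illustration; the actual model uses trilinear (bilinear in space) sampling.

```python
# Sketch: apply a learned per-pixel flow field to one frame by backward
# warping. Flow layout (H, W, 2) with pixel offsets is an assumption;
# nearest-neighbor sampling is used here for brevity.
import numpy as np

def warp(frame, flow):
    """Sample `frame` at positions displaced by `flow`."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]

frame = np.arange(16, dtype=np.float32).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0            # sample one pixel to the right everywhere
out = warp(frame, flow)
print(out[0])                 # [1. 2. 3. 3.] (edge pixels clamped)
```

Because the sampling grid is generated from the frame's own height and width, the same learned flow (when expressed in normalized coordinates, as in the paper) transfers to higher-resolution frames.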
It is used to accommodate the legacy batch-norm bug (i.e., incorrect running mean and variance) in early TensorFlow versions. But the network parameters are not updated during the...
Yes, the data in the HDF5 file is stored in BGR channel order to match the Caffe convention. The image values have been normalized as: data =...
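For readers preparing their own data, the channel reordering is just a reversal of the last axis. This is a sketch of that step only; the exact normalization formula is truncated in the reply above, so it is deliberately omitted here.

```python
# Sketch: reorder RGB arrays to BGR channel-wise, matching the Caffe
# convention described above. Normalization is intentionally omitted
# because the original formula is truncated.
import numpy as np

rgb = np.stack([np.full((2, 2), v) for v in (10, 20, 30)], axis=-1)  # HxWx3, RGB
bgr = rgb[..., ::-1]  # reverse the channel axis: RGB -> BGR
print(bgr[0, 0])      # [30 20 10]
```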
@richliao Yes, the inputs to our Deep Fashion Alignment (DFA) are clothing bounding boxes. We treat this detection-and-cropping step as pre-processing and do not include it in this codebase....