Ming-Yu Liu 劉洺堉
@wanghxcis Could you send me some example images and training results?
Thanks, @kingsj0405! @arunmallya Could you address the issue?
@xukaiquan Stitching the head back is a challenging problem by itself. As we animate the face, the neck position may change, so directly pasting the face back results in stitching...
For the train and test splits, please check out https://github.com/NVlabs/FUNIT/blob/master/datasets/animals_train_class_names.txt https://github.com/NVlabs/FUNIT/blob/master/datasets/animals_list_train.txt https://github.com/NVlabs/FUNIT/blob/master/datasets/animals_list_test.txt On Mon, Aug 17, 2020 at 9:59 AM JonathanBrok wrote: > Although the link you gave works,...
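If it helps, those split files can be read with a few lines of Python. This is a rough sketch that assumes the files contain one class name (or image path) per line, which is the format of `animals_train_class_names.txt` in the repo; the temporary file here is only a stand-in for demonstration.

```python
import os
import tempfile

def load_class_names(path):
    """Read a FUNIT split file: one entry per line, blank lines skipped."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

# Demo with a temporary file standing in for animals_train_class_names.txt.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("meerkat\nred_fox\n\nsnow_leopard\n")
    tmp_path = f.name

names = load_class_names(tmp_path)
print(names)  # ['meerkat', 'red_fox', 'snow_leopard']
os.remove(tmp_path)
```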
Please check out our imaginaire repo. We have a link to download the pre-processed animal face dataset there. Just need to follow the README for FUNIT there. https://github.com/NVlabs/imaginaire
FYI We have a cleaner and better implementation of FUNIT in https://github.com/NVlabs/imaginaire Due to our limited resources, we will likely support Imaginaire better in the future.
We are still going through the code for a final check. The complete code will likely be released in June.
On the last page of the FUNIT arXiv paper (which will be presented at ICCV 2019), we do have the few-shot face translation experiments. We simply use CelebA for the...
Each human identity is a class. CelebA has the person name for each photo. You can divide the training set into different classes using the name.
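As a rough sketch of that split (assuming CelebA's identity annotation format, where each line of `identity_CelebA.txt` maps an image filename to an identity ID), grouping photos into per-identity classes could look like:

```python
from collections import defaultdict

def group_by_identity(lines):
    """Group CelebA image filenames by identity.

    Each line is '<image_name> <identity_id>', the format used by
    CelebA's identity_CelebA.txt annotation file. Returns a dict
    mapping identity ID -> list of image filenames (one class each).
    """
    classes = defaultdict(list)
    for line in lines:
        name, identity = line.split()
        classes[identity].append(name)
    return dict(classes)

# Toy example with made-up annotation lines.
annotations = [
    "000001.jpg 2880",
    "000002.jpg 2937",
    "000003.jpg 2880",
]
classes = group_by_identity(annotations)
print(sorted(classes["2880"]))  # ['000001.jpg', '000003.jpg']
```

Each resulting key is then treated as one class for the few-shot training setup.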