Ming-Yu Liu 劉洺堉

16 comments by Ming-Yu Liu 劉洺堉

https://github.com/mingyuliutw/CoGAN/blob/master/README.md ?

The training script belongs to Mitsubishi Electric Research Labs. I did the experiment a long time ago. I think what you need to do is: 1) Resize the image to...

The speed issue is now fixed in commit [f972e42](https://github.com/NVlabs/MUNIT/commit/f972e4237e2a8615c80950f2b987924256586e5c).

@Cuky88 The degraded performance after migrating to PyTorch 0.4 is likely caused by the instance normalization parameters. We accidentally set `track_running_stats=True` in `networks.py`. This means that it will use...
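For reference, a minimal sketch of what this flag changes in PyTorch (the feature dimension of 64 is just an illustrative value, not taken from the repo):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)

# With track_running_stats=True, the layer accumulates running mean/var
# during training and, in eval mode, normalizes with those dataset-level
# statistics instead of the current instance's own statistics.
in_tracked = nn.InstanceNorm2d(64, track_running_stats=True)

# With the default track_running_stats=False, the layer always normalizes
# with statistics computed from the current input, in train and eval mode.
in_plain = nn.InstanceNorm2d(64, track_running_stats=False)

in_tracked.eval()
in_plain.eval()
y_tracked = in_tracked(x)  # uses running statistics
y_plain = in_plain(x)      # uses per-instance statistics
```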

For 2, we use LSGAN, as specified in Section 5.1 of the paper. For 3, we use all the training images in the datasets for training.
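For reference, a minimal sketch of the standard LSGAN objective (the `dis`, `real`, and `fake` names are hypothetical placeholders, not identifiers from the repo):

```python
import torch

def lsgan_dis_loss(dis, real, fake):
    # LSGAN: push discriminator outputs toward 1 for real and 0 for fake.
    loss_real = torch.mean((dis(real) - 1) ** 2)
    loss_fake = torch.mean(dis(fake.detach()) ** 2)
    return loss_real + loss_fake

def lsgan_gen_loss(dis, fake):
    # The generator tries to make the discriminator output 1 on fake samples.
    return torch.mean((dis(fake) - 1) ** 2)
```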

The difference between AdaIN and IN is that the affine parameters in AdaIN are data-adaptive (different test data have different affine parameters), while the affine parameters in IN are fixed (fixed...
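A minimal sketch of the distinction, assuming `gamma` and `beta` are predicted from a style code (function and argument names are hypothetical):

```python
import torch

def adain(content_feat, gamma, beta, eps=1e-5):
    # Per-instance, per-channel statistics of the content features.
    b, c = content_feat.size(0), content_feat.size(1)
    feat = content_feat.view(b, c, -1)
    mean = feat.mean(dim=2).view(b, c, 1, 1)
    std = (feat.var(dim=2) + eps).sqrt().view(b, c, 1, 1)
    # gamma/beta come from the style code, so each test input gets its own
    # affine parameters. In plain IN they are fixed learned constants, as in
    # nn.InstanceNorm2d(c, affine=True).
    return gamma * (content_feat - mean) / std + beta
```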

@MilanKolkata In commit [f972e42](https://github.com/NVlabs/MUNIT/commit/f972e4237e2a8615c80950f2b987924256586e5c), the custom LayerNorm only supports one image per batch. With the new commit [4c21350](https://github.com/NVlabs/MUNIT/commit/4c21350603d83406a9712f1ea02aa5f564eea0ad), it supports multiple images per batch. However, the time required for...
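For context, a sketch of a layer norm that computes one mean/std per sample over (C, H, W) and therefore works for any batch size; this is an illustrative version, not the exact code in the commit:

```python
import torch
import torch.nn as nn

class LayerNorm2d(nn.Module):
    """Normalizes each sample in the batch over its (C, H, W) dimensions."""
    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(1, num_features, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, num_features, 1, 1))
        self.eps = eps

    def forward(self, x):
        # One mean/std per sample, so batch size > 1 is handled naturally.
        mean = x.flatten(1).mean(dim=1).view(-1, 1, 1, 1)
        std = x.flatten(1).std(dim=1).view(-1, 1, 1, 1)
        return self.gamma * (x - mean) / (std + self.eps) + self.beta
```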

@HsinYingLee and @MilanKolkata I think the code now works properly in PyTorch 0.4. We spent some time in the past few days playing with `summer2winter_yosemite256`. We found that...

@akashdexati Please check the `datasets` folder. We now provide two interfaces: one is folder-based and the other is list-based. Check out `edges2handbags_folder.yaml` and `edges2handbags_list.yaml` for usage examples.
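As a rough sketch of what a list-based interface looks like (class name and signature are hypothetical, not the repo's actual implementation), a dataset can read image paths from a text file, one per line:

```python
import os
from PIL import Image
import torch.utils.data as data

class ImageFilelist(data.Dataset):
    """Loads images listed in a text file, one relative path per line."""
    def __init__(self, root, flist, transform=None):
        with open(flist) as f:
            self.paths = [line.strip() for line in f if line.strip()]
        self.root = root
        self.transform = transform

    def __getitem__(self, index):
        img = Image.open(os.path.join(self.root, self.paths[index])).convert('RGB')
        return self.transform(img) if self.transform else img

    def __len__(self):
        return len(self.paths)
```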

@OValery16 Yes, we find that the domain-invariant perceptual loss is useful for large images. At an image resolution of 256x256, we do not use the domain-invariant perceptual loss.
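One way to make a perceptual loss domain-invariant is to instance-normalize the VGG features before comparing them, so the loss measures structure rather than domain-specific appearance. A minimal sketch under that assumption (the feature cutoff at relu3_3 is an illustrative choice, not the repo's exact layer):

```python
import torch
import torch.nn as nn
from torchvision import models

# Frozen VGG16 feature extractor up to relu3_3.
vgg = models.vgg16(pretrained=True).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad = False

def domain_invariant_perceptual_loss(img_a, img_b):
    # Instance normalization removes per-channel mean/variance, which carry
    # much of the domain-specific appearance, before comparing features.
    feat_a = nn.functional.instance_norm(vgg(img_a))
    feat_b = nn.functional.instance_norm(vgg(img_b))
    return torch.mean((feat_a - feat_b) ** 2)
```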