Yijun Li
@taesiri Yes, the TensorFlow implementation (by Evan) includes some code optimizations to reduce memory usage. Check the second paragraph in Evan's README: "_As in the original paper, reconstruction decoders...
@longxiong2016 I think it is due to their training settings. You could try the images from their paper and see whether you get similar results.
@994088895 Thanks for your interest in our work. I used CUDA 7.0 and cuDNN v2 at the time.
@visonpon Thanks for your interest in our work. Yes, our model requires two input images (target/guidance). If you only have RGB, you may run some depth prediction...
@okdewit You're right. I did not include this in my code. As long as you have the mask, you can decide to stylize only some parts and keep the rest...
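To illustrate the idea in the reply above: once you have a mask, the stylized output and the original image can be blended per pixel. This is only a minimal numpy sketch of that blending, not code from the repo; all names here are illustrative.

```python
import numpy as np

def masked_stylize(original, stylized, mask):
    """Blend a stylized image with the original using a binary/soft mask.

    original, stylized: H x W x C float arrays
    mask: H x W float array, 1.0 where the stylized result should be kept,
          0.0 where the original pixels should be preserved.
    """
    mask = mask[..., None]  # add a channel axis so the mask broadcasts over C
    return mask * stylized + (1.0 - mask) * original
```

A soft (feathered) mask gives smoother transitions at region boundaries than a hard 0/1 mask.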
I do not have tools for that. Basically, I just save the weight/bias of each layer as a `.t7` file, which can be loaded through `load_lua` and then copied into...
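The per-layer copy described above can be sketched as follows. This is a hypothetical illustration with numpy dicts standing in for the tensors one would get from `load_lua` (available as `torch.utils.serialization.load_lua` in older PyTorch releases); the layer names and shapes here are made up.

```python
import numpy as np

# Stand-in for the parameters loaded from per-layer .t7 files,
# e.g. via: from torch.utils.serialization import load_lua  (legacy PyTorch)
legacy_layers = {
    "conv1": {"weight": np.random.randn(64, 3, 3, 3), "bias": np.zeros(64)},
}

# Target model parameters with matching layer names and shapes
target = {"conv1": {"weight": np.empty((64, 3, 3, 3)), "bias": np.empty(64)}}

for name, params in legacy_layers.items():
    # Copy weight and bias tensor-by-tensor into the matching layer
    target[name]["weight"][...] = params["weight"]
    target[name]["bias"][...] = params["bias"]
```

The key requirement is that source and target layers line up one-to-one with identical tensor shapes; any mismatch should be caught before copying.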
@Sugarbank I think so. Currently, our extension to video is not good, and the SVD appears less stable across consecutive frames than the mean/variance shift in AdaIN.
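For context on the comparison above: AdaIN only shifts per-channel statistics, which is why it behaves more smoothly frame to frame than a full whitening/coloring via SVD. A minimal numpy sketch of that mean/std matching (an illustration of the standard AdaIN formula, not the paper's code; features are assumed to be C x H x W):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Match the per-channel mean/std of content features to style features."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # Normalize content statistics, then rescale to the style statistics
    return s_std * (content - c_mean) / (c_std + eps) + s_mean
```

Because only two scalars per channel change between frames, small feature perturbations produce small output changes, whereas the singular vectors in an SVD can flip or reorder between nearby frames.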
I have not tried that yet. You may ask the authors. See if you can start training from the pretrained models first.
@Sugarbank Thanks for your interest in our work. Yes, we provide an option for running in CPU mode: ``` th test_wct.lua -gpu -1 ```
@jjmontesl Hi, do you mean that you cannot load my model trained on GPU? Did you install cuDNN?