
Any methods to increase the generation speed?

xiaojieli0903 opened this issue Jun 09 '17 · 9 comments

When I run the demo on my GPU to generate the style image with the parameters `0 0.5 2 0`, it takes 54 s in total, and if I increase the 'Ratio' to 1, it takes as long as 250 s. The results are fantastic, but generation takes too much time.

So I want to ask: is there any way to increase the generation speed? I have changed VGG-19 to VGG-16 and to ResNet-50, but their results are not pleasing and the time does not decrease much.

Any further guidance or info would be much appreciated.

xiaojieli0903 avatar Jun 09 '17 08:06 xiaojieli0903

  1. You can use a decoder instead of the LBFGS algorithm to deconvolve the feature maps. That would save a lot of time (see the sketch after this list).
  2. If you do not care as much about result quality, you can compute only one direction's result. Dropping one of the two directions (AB or A'B') saves nearly 50 percent of the time.
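To illustrate point 1, here is a minimal PyTorch sketch of the idea (not code from this repository): once a decoder has been trained to invert the encoder, reconstructing an image from a warped feature map is a single forward pass instead of an iterative LBFGS optimization. The layer sizes and the checkpoint name are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Rough mirror of VGG-19 from relu4_1 back to image space; the exact
# architecture here is an illustrative assumption.
decoder = nn.Sequential(
    nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode='nearest'),
    nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode='nearest'),
    nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode='nearest'),
    nn.Conv2d(64, 3, 3, padding=1),
)
# In practice you would load weights trained offline, e.g.:
# decoder.load_state_dict(torch.load('decoder.pth'))  # hypothetical checkpoint
decoder.eval()

warped_feat = torch.randn(1, 512, 28, 28)  # stands in for the blended feature map
with torch.no_grad():
    image = decoder(warped_feat)  # one forward pass replaces many LBFGS iterations
```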

rozentill avatar Jun 10 '17 05:06 rozentill

Thank you very much for your recommendation. I'll try these two methods to see if they can speed up.

xiaojieli0903 avatar Jun 19 '17 02:06 xiaojieli0903

I want to know more about how the speed varies with resolution. Can you show us more results?

zencyyoung avatar Aug 02 '17 14:08 zencyyoung

  1. Could you kindly explain what kind of decoder to use? Is it something like the pre-trained 'fast neural style' network?
  2. Do you think replacing PatchMatch with a propagation-assisted kd-tree could improve the speed?

gxlcliqi avatar Oct 31 '17 14:10 gxlcliqi

@gxlcliqi Hi, you can train a decoder so that the features it reconstructs are equivalent to those in the encoder. There is a reference: https://arxiv.org/abs/1705.08086 .
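If it helps, here is a minimal training-loop sketch in the spirit of that paper, combining a pixel reconstruction loss with a feature loss. The encoder `vgg_to_relu4_1` (a frozen, pre-trained VGG sub-network), the `decoder` from the sketch above, and the data loader are assumed to exist; the names are illustrative.

```python
import torch
import torch.nn.functional as F

# Assumed to exist: `decoder` (as sketched earlier), `vgg_to_relu4_1`
# (frozen, pre-trained encoder), and `loader` yielding image batches.
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-4)
feat_weight = 1.0  # balances pixel loss against feature loss

for images in loader:
    feats = vgg_to_relu4_1(images).detach()   # encoder stays fixed
    recon = decoder(feats)                    # decode back to image space
    pixel_loss = F.mse_loss(recon, images)    # image reconstruction
    feat_loss = F.mse_loss(vgg_to_relu4_1(recon), feats)  # feature consistency
    loss = pixel_loss + feat_weight * feat_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```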

Yes, I think a kd-tree can speed up PatchMatch.
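To make the idea concrete, here is a rough NumPy/SciPy sketch of kd-tree patch matching (not code from this repository): build a tree over the target patches, then query every source patch for its nearest neighbor. Patch size and feature dimensions are placeholders; note that kd-trees degrade on high-dimensional deep features, so projecting patches to a lower dimension first (e.g., via PCA) is a common trick.

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_patches(feat, size=3):
    """feat: (H, W, C) feature map -> (N, size*size*C) flattened patches."""
    H, W, C = feat.shape
    patches = []
    for y in range(H - size + 1):
        for x in range(W - size + 1):
            patches.append(feat[y:y + size, x:x + size].ravel())
    return np.asarray(patches)

src = np.random.rand(32, 32, 16).astype(np.float32)  # placeholder source features
dst = np.random.rand(32, 32, 16).astype(np.float32)  # placeholder target features

tree = cKDTree(extract_patches(dst))                 # index target patches once
dist, nnf = tree.query(extract_patches(src), k=1)    # nearest-neighbor field
```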

rozentill avatar Nov 07 '17 16:11 rozentill

@rozentill Thank you very much for the information, I will try it.

gxlcliqi avatar Nov 08 '17 10:11 gxlcliqi

@rozentill Hi, I don't understand why there must be two directions. I mean, if there is only one direction, how will the result be affected? Thanks a lot.

gaoyangyiqiao avatar Nov 18 '17 13:11 gaoyangyiqiao

@gaoyangyiqiao Hi, one direction also works. Both the arXiv and SIGGRAPH versions of our paper compare one direction against two; the two-direction results are better because the matching becomes more accurate.

rozentill avatar Nov 18 '17 20:11 rozentill

@rozentill Thanks a lot for answering. May I ask one more question: is there a Python implementation of this paper?

gaoyangyiqiao avatar Nov 22 '17 10:11 gaoyangyiqiao