
[Observation] Running same image through AdaIN

Open ArturoDeza opened this issue 7 years ago • 16 comments

I'm tweaking the code to do a somewhat trivial example: running any image (in this case a scene, not a texture) through AdaIN with the style and content being the same image, with the goal of getting an exact reconstruction of the input (as other style transfer methods can do). However, AdaIN still seems to texturize the output in this procedure, and the result looks a lot like a watercolor version of the same image, with most of the fine detail lost. I will try to post an example of this soon.

What would be a good way to work around this? I know the training code is not out yet, but perhaps training the network on non-textures would improve this, or just more rounds of training? Thoughts?

ArturoDeza avatar Apr 11 '17 04:04 ArturoDeza

[content image and style image attached]

I think AdaIN still has a lot of room for improvement. If there is a breakthrough, it will be amazing.

dovanchan avatar Apr 11 '17 05:04 dovanchan

@ArturoDeza Could you post an example of the input image you used? Adaptive Instance Normalization itself should not cause any change: if you substitute x = y in equation (8) here, you get AdaIN(x, y) = x. The only other possible reason for change is that the decoder is not the exact inverse of the encoder. This can be solved by training until Lc (see Figure 2 of the paper) is very low.
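For concreteness, here is a minimal NumPy sketch of equation (8) (not the repository's Torch/Lua code; the eps term is an added assumption for numerical stability). With identical inputs the normalization cancels out, so AdaIN(x, x) equals x up to eps:

```python
import numpy as np

def adain(x, y, eps=1e-5):
    # Align the channel-wise mean/std of content features x to those of style features y.
    # x, y have shape (C, H, W).
    x_mean = x.mean(axis=(1, 2), keepdims=True)
    x_std = x.std(axis=(1, 2), keepdims=True)
    y_mean = y.mean(axis=(1, 2), keepdims=True)
    y_std = y.std(axis=(1, 2), keepdims=True)
    return y_std * (x - x_mean) / (x_std + eps) + y_mean

x = np.random.rand(64, 32, 32).astype(np.float32)
# With x as both content and style, the output is (numerically) the input itself.
print(np.abs(adain(x, x) - x).max())
```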

I tried to use cornell.jpg as both my content and style image. The output I got seems reasonable (I used models/decoder-content-similar.t7).

gsssrao avatar Apr 11 '17 10:04 gsssrao

I think the result is similar to fast-neural-style.

dovanchan avatar Apr 11 '17 10:04 dovanchan

This is what I get when I run th test.lua -content 3.png -style 3.png with the default values. Can someone please replicate this to make sure they get the same results? @gsssrao @dovanchan

[attached: 3.png and 3_stylized_3.png]

My output when running th test.lua -style input/content/cornell.jpg -content input/content/cornell.jpg is attached (cornell_stylized_cornell). It looks a bit off, slightly lower quality than yours @gsssrao.

ArturoDeza avatar Apr 11 '17 16:04 ArturoDeza

@ArturoDeza I too get similar results. It seems you are using the default decoder. Try the newer one. It has slightly better decoder weights.

th test.lua -style input/content/cornell.jpg -content input/content/cornell.jpg -decoder models/decoder-content-similar.t7

With this you should get the same results as mine. For the image you provided, these are the results I get for the two decoders:

As I mentioned earlier, I think the trained decoder is not the exact inverse of the encoder. This can be solved by training until Lc (see Figure 2 of the paper) is very low, i.e. by training for more iterations.
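For reference, a rough PyTorch sketch of that decoder-training step (the official training code is not released, so the encoder and decoder below are toy placeholders for the fixed VGG encoder f and the trainable decoder g, and the hyperparameters are assumptions):

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())  # placeholder for the fixed encoder f
decoder = nn.Sequential(nn.Conv2d(64, 3, 3, padding=1))             # placeholder for the trainable decoder g
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-4)

content = torch.rand(1, 3, 256, 256)
with torch.no_grad():
    t = encoder(content)   # with content == style, AdaIN(f(c), f(c)) = f(c), so the target is just f(c)

out = decoder(t)                                 # reconstruct the image from the target features
lc = nn.functional.mse_loss(encoder(out), t)     # content loss Lc: re-encoded output vs. target features
optimizer.zero_grad()
lc.backward()
optimizer.step()
# Training until lc is very low makes the decoder a closer inverse of the encoder,
# which is what reduces the reconstruction artifacts discussed above.
```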

gsssrao avatar Apr 11 '17 20:04 gsssrao

Makes sense! Thanks, it gave me better results! Hoping the training code comes out soon.

ArturoDeza avatar Apr 13 '17 22:04 ArturoDeza

@ArturoDeza How about the result using my content and style image? Can you show it to me?

dovanchan avatar Apr 14 '17 00:04 dovanchan

There is a TensorFlow port I am currently working on, and we have an issue: the pictures all appear darker and the colors are a bit off compared to the paper.

I would really appreciate some help; take a look at the code here: https://github.com/jonrei/tf-AdaIN

hristorv avatar Apr 14 '17 06:04 hristorv

@dovanchan Hmm, I still seem to get the same tile-like artifacts you are getting. I think this can be avoided with the training procedure used in the Diverse Synthesis paper. Attaching my output (cat1_stylized_style1). On the flip side, notice that the style image you are inputting also has a heavy brush-like painting style, which I think AdaIN is capturing.

ArturoDeza avatar Apr 14 '17 19:04 ArturoDeza

@hristorv I'm facing the same issue while working with Lasagne. Almost all pictures seem to have a blue tint. Did you find a solution for that?

LalitPradhan avatar May 23 '17 07:05 LalitPradhan

@LalitPradhan I have solved the problem. The color values of the images are represented from 0 to 1, but they should be from 0 to 255 after postprocessing. What we need to do is preprocess the images by dividing by 255, and then postprocess by multiplying by 255 and clipping the values. I am now getting the same results as the original paper and code.
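Roughly, the fix looks like this (a NumPy sketch; the actual helper names in the tf-AdaIN port may differ):

```python
import numpy as np

def preprocess(img_uint8):
    # Scale pixel values from [0, 255] to [0, 1] before feeding the network.
    return img_uint8.astype(np.float32) / 255.0

def postprocess(net_output):
    # Scale back to [0, 255] and clip; without this step the images look dark / washed out.
    return np.clip(net_output * 255.0, 0.0, 255.0).astype(np.uint8)
```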

P.S. Check https://github.com/jonrei/tf-AdaIN; there is a discussion about this issue there.

hristorv avatar May 23 '17 08:05 hristorv

@hristorv Can you run a test with my content image and style image? (The cat photo I used before.)

dovanchan avatar May 23 '17 08:05 dovanchan

@hristorv Thanks. Your solution worked.

LalitPradhan avatar May 23 '17 15:05 LalitPradhan

@ArturoDeza Did you fix the noise problem yet? Would you mind sharing your Lua solution?

MonaTanggg avatar Jul 10 '17 10:07 MonaTanggg

@MonaTanggg See this thread: https://github.com/xunhuang1995/AdaIN-style/issues/16

Essentially I trained a pix2pix Super-Resolution module that maps back to the original image. It does quite a good job removing the artifacts.

ArturoDeza avatar Jul 31 '17 14:07 ArturoDeza

Hello. I have been implementing this in TensorFlow with reference to this. I am getting results with very low contrast; the pictures are extremely dull. Implementing fast style transfer with a generator network gave much better results.

Can someone tell me how I can correct this? Am I doing something wrong? I am also making sure the output image is converted to the 0 to 255 range.

Thanks!

akhauriyash avatar Dec 26 '17 23:12 akhauriyash