ProGamerGov
@West102 I can't seem to reproduce the issue with PyTorch 1.7.0, though I was using Python 3.6. I also didn't see anything in the change notes that would cause this...
@Bird-NZ The batch size is always 1 for neural-style-pt, since batch size refers to the number of images run through the network at once. You can reduce memory usage by using...
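(As a hedged illustration only: lowering `-image_size` is the biggest memory lever in neural-style-pt; the flag is real, but the value and file names below are made up.)

```python
import subprocess

# Illustrative only: activation memory grows roughly with the square of
# the output resolution, so a smaller -image_size cuts usage sharply.
subprocess.run([
    "python", "neural_style.py",
    "-content_image", "content.jpg",
    "-style_image", "style.jpg",
    "-image_size", "384",
], check=True)
```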
@RoronoaZoroSenpai You're going to have to use a bash or Python script to run `neural_style.py` individually for each file. There also won't be any temporal coherence, so there will be...
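A minimal Python sketch of that per-file loop (the `frames/` and `out/` directories and image names are hypothetical; the flags themselves are real neural-style-pt options):

```python
import subprocess
from pathlib import Path

style = "style.jpg"          # hypothetical style image
out_dir = Path("out")
out_dir.mkdir(exist_ok=True)

# Run neural_style.py once per input frame; no state is shared between
# runs, which is why there is no temporal coherence between outputs.
for frame in sorted(Path("frames").glob("*.png")):
    subprocess.run([
        "python", "neural_style.py",
        "-content_image", str(frame),
        "-style_image", style,
        "-output_image", str(out_dir / frame.name),
    ], check=True)
```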
@genekogan It's not immediately clear to me what is causing these artifacts, but I'll look into it and see if I can figure it out. Have you tested any of...
The VGG-16 model for the Lua version can be found here: https://gist.github.com/ksimonyan/211839e770f7b538e2d8, and a list of many of the models supported by the Lua version is here: https://github.com/jcjohnson/neural-style/wiki/Using-Other-Neural-Models. The...
Maybe there are layer implementation differences between Torch and PyTorch that could be causing the artifacts? Have you tried using [resize convolutions (that are designed to deal with checkerboard...
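For reference, a minimal PyTorch sketch of the resize-convolution idea from that article (not code from either repo, just the general technique):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResizeConv2d(nn.Module):
    """Upsample first, then convolve: every output pixel gets the same
    kernel overlap, avoiding the checkerboard pattern that transposed
    convolutions can produce."""
    def __init__(self, in_ch, out_ch, kernel_size=3, scale=2):
        super().__init__()
        self.scale = scale
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=self.scale, mode="nearest")
        return self.conv(x)

# Example: upscale a 1x64x32x32 feature map to a 1x3x64x64 image.
y = ResizeConv2d(64, 3)(torch.randn(1, 64, 32, 32))
```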
@genekogan Can you still reproduce this issue on the latest version of PyTorch? I was trying to see if my [gradient normalization code](https://github.com/ProGamerGov/neural-style-pt/commit/cbcd023326a3487a2d75270ed1f3b3ddb4b72407) fixed it, but I can't even get...
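For anyone following along, the gist of gradient normalization (a sketch of the approach, not the exact code in the linked commit) is to rescale the gradient by its L1 norm on the backward pass, the way the original neural-style's `-normalize_gradients` flag does:

```python
import torch

class NormalizeGrad(torch.autograd.Function):
    """Identity in the forward pass; divides the gradient by its L1 norm
    in backward, so content/style gradients have comparable magnitudes."""
    @staticmethod
    def forward(ctx, input):
        return input

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output / (grad_output.norm(p=1) + 1e-8)
```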
@genekogan I was testing with the brad_pitt.jpg example image and hokusai: [comparison images omitted] Tests with and without gradient normalization (some control tests were set to (strength * strength) to make the...
Another thing is that `neural_style.lua` appears to multiply the gradients by the content / style weights in the backward pass instead of dividing by them. So in order to reproduce...
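To make that difference concrete (a hedged sketch, not code from either repo): the Lua loss modules normalize the gradient and then multiply it by the layer's strength in backward, so a PyTorch port that also scales the loss by strength in forward ends up applying the weight twice, which may be why the (strength * strength) control tests appear above.

```python
import torch

# neural_style.lua (roughly): backward normalizes, then multiplies:
#   grad = strength * grad / (||grad||_1 + eps)
def lua_style_backward(grad_output, strength, eps=1e-8):
    return strength * grad_output / (grad_output.norm(p=1) + eps)

# In a PyTorch port where the forward loss is already strength * crit(...),
# autograd multiplies by strength once on its own; reproducing the Lua
# behaviour after normalization therefore adds a second strength factor.
```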
@genekogan These are the different parameters I used for testing with both Adam and L-BFGS, I think (the tests above are all with L-BFGS, and the master branch with no...