chainer-fast-neuralstyle-models
(Crop)
I love the styles you have! Thank you so much!
I'm trying to make some models of my own, but I'm having trouble with the exact training options. What do you mean by (crop) in your examples? Is that a flag? Are you cropping the original art? Also, about your training set: does "full coco" mean the full train2014 set with 80k images, or a different set?
Thanks.
The training code resizes your style image to the square size given by the --image_size
argument. The dataset images have to match the size of your style. Since most images in the set are rectangular (640x480), you have two options: simply resize without preserving aspect ratio, which distorts objects slightly, or crop to a square and then resize to match the style size. Here is a little comparison with the resulting transformations at the end. The code crops by default, and in my fork I implemented an option to keep full-size images with the --fullsize
flag. Hope this helps.
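The two preprocessing options described above can be sketched like this with Pillow. This is only an illustration of the crop-then-resize versus direct-resize behavior, not the project's actual preprocessing code; the function names are my own.

```python
from PIL import Image

def crop_to_square_and_resize(path, image_size):
    """Center-crop an image to a square, then resize it to image_size.

    Mirrors the default (crop) behavior: no distortion, but the edges
    of rectangular images are discarded.
    """
    img = Image.open(path).convert("RGB")
    w, h = img.size
    side = min(w, h)
    left = (w - side) // 2
    top = (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    return img.resize((image_size, image_size), Image.LANCZOS)

def resize_distort(path, image_size):
    """Resize directly to a square without preserving aspect ratio.

    Keeps the whole image, but slightly distorts objects (the
    --fullsize-style alternative).
    """
    img = Image.open(path).convert("RGB")
    return img.resize((image_size, image_size), Image.LANCZOS)
```

For a 640x480 COCO image and --image_size 256, the first function crops to the central 480x480 region before resizing, while the second squeezes the full frame into 256x256.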
And yes, the full coco means the train2014 dataset with 80k+ images.
Ok, that makes sense...
Would pre-cropping all the training images speed things up at all?
It's taking me 2 days to run a training set on my old graphics card, so I'm looking for any way to make it faster.
Haha, yes, I've already tried that. But the speed-up is insignificant; resizing is certainly not the bottleneck. It takes about 50 ms to resize an image on a Core2Duo at 3.33 GHz, so pre-cropping would save about an hour per epoch (80k images × 50 ms ≈ 67 minutes). On quad-core systems the difference would be even smaller. If you plan to do a lot of training it would make sense; otherwise, don't bother with it.
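If you do decide to pre-crop, a one-off pass over the dataset directory is enough. This is a hypothetical helper, not part of the training repo; the directory layout and function name are assumptions.

```python
import os
from PIL import Image

def precrop_dataset(src_dir, dst_dir, image_size):
    """Center-crop and resize every image in src_dir once, saving the
    results to dst_dir so the training loop can skip per-image resizing.
    src_dir / dst_dir are placeholder paths.
    """
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        if not name.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        img = Image.open(os.path.join(src_dir, name)).convert("RGB")
        w, h = img.size
        side = min(w, h)
        left, top = (w - side) // 2, (h - side) // 2
        img = img.crop((left, top, left + side, top + side))
        img = img.resize((image_size, image_size), Image.LANCZOS)
        img.save(os.path.join(dst_dir, name))
```

Run once before training, then point the training script at the pre-cropped directory; the per-epoch saving is roughly the resize cost times the dataset size, as estimated above.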