image-analogies
Hours to compute on a decent GPU, is everything working ok?
I have CUDA on a GTX 1070 with cuDNN.
I used a patch size of 3 and the brute-force model.
A 700px square image took 7 hours (and 20 minutes).
I'm 100% sure the GPU was being used.
I noticed that the "static feature computation" took a very long time and was likely done on the CPU (judging by the GPU's memory usage). Iteration 2x0 also took very long; the others were a lot faster.
Is this how it's supposed to be? The result is stunning, so I'm OK with that... just want to be sure.
for reference:
Hey, there was indeed some kind of performance problem, which I think is now fixed. I had the same issue with my 1080. I looked into it and it wasn't running the brute-force patch matching on the GPU (`T.nnet.conv2d` was using a `ConvOp` node rather than a CUDA op). I added a hack to force GPU usage and it now runs at the regular slow-ish speed for me. Not sure when that started happening.
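If you want to verify this on your own setup, a quick generic Theano check (not part of image-analogies itself) is to compile a small convolution and print the op names in the optimized graph, something like:

```python
import theano
import theano.tensor as T

# Compile a tiny convolution and inspect which ops Theano actually chose.
x = T.tensor4('x')
w = T.tensor4('w')
f = theano.function([x, w], T.nnet.conv2d(x, w))

# CPU-only graphs contain nodes like ConvOp / CorrMM; GPU graphs contain
# Gpu-prefixed ops such as GpuCorrMM or GpuDnnConv.
for node in f.maker.fgraph.toposort():
    print(type(node.op).__name__)
```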
If you try again, note that you'll need to upgrade to keras v1.
I'll try in a few hours, thanks!
I ran `sudo pip install neural-image-analogies --upgrade`
and got:
```
Requirement already up-to-date: neural-image-analogies in /usr/local/lib/python3.4/dist-packages
Requirement already up-to-date: Theano>=0.8.2 in /usr/local/lib/python3.4/dist-packages (from neural-image-analogies)
Requirement already up-to-date: h5py>=2.5.0 in /usr/local/lib/python3.4/dist-packages (from neural-image-analogies)
Requirement already up-to-date: six>=1.10.0 in /usr/local/lib/python3.4/dist-packages (from neural-image-analogies)
Requirement already up-to-date: Keras>=1.0.0 in /usr/local/lib/python3.4/dist-packages (from neural-image-analogies)
Requirement already up-to-date: Pillow>=3.1.1 in /usr/local/lib/python3.4/dist-packages (from neural-image-analogies)
Requirement already up-to-date: scipy>=0.17.0 in /usr/local/lib/python3.4/dist-packages (from neural-image-analogies)
Requirement already up-to-date: PyYAML>=3.11 in /usr/local/lib/python3.4/dist-packages (from neural-image-analogies)
Requirement already up-to-date: scikit-learn>=0.17.0 in /usr/local/lib/python3.4/dist-packages (from neural-image-analogies)
Requirement already up-to-date: numpy>=1.10.4 in /usr/local/lib/python3.4/dist-packages (from neural-image-analogies)
```
It still seems to be around the same speed as before: 2 hours in on an 800px image and still not done.
Do you have any benchmarks for the 1080?
Can you confirm that `pip show neural-image-analogies` displays version 0.1.2? Does the output from `which make_image_analogy.py` make sense?
It takes around 10-12 minutes to generate a 512x512 image with `--brute` on my machine. I'll try a larger size when I get a chance. I haven't yet seen the limits of what the 1080 can handle, but 800px^2 would have been out of reach memory-wise with my last GPU, a GTX 780.
```
Metadata-Version: 2.0
Name: neural-image-analogies
Version: 0.1.2
Summary: Generate image analogies with a deep neural network.
Home-page: https://github.com/awentzonline/image-analogies/
Author: Adam Wentz
Author-email: [email protected]
Installer: pip
License: UNKNOWN
Location: /usr/local/lib/python3.4/dist-packages
Requires: Theano, h5py, six, scikit-learn, scipy, Pillow, numpy, PyYAML, Keras
Classifiers:
```

```
/usr/local/bin/make_image_analogy.py
```
I'll do more testing tonight. The previous 700px run crashed my system (as if it ran out of CPU RAM, which is a bit strange). I noticed that the starting resolution of the 3 input images changes the GPU memory usage (though I'm not completely sure); maybe it also affects the computation time?
This is the command I used for the 7-hour picture: `make_image_analogy.py rsz_1132.jpg rsz_1132_p.jpg calor.png out/blue --patch-size=3 --mrf-w 1.5 --model=brute --width 700`. I've done a 512px one now and it took around 16 minutes, which on a 1070 is reasonable compared to your 10+ minutes. It's strange that the time isn't even close to quadratic, though; if this keeps happening I'll try a fresh install.
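As a rough sanity check of what I mean by "not even close to quadratic" (assuming runtime roughly tracks pixel count, i.e. the square of the width, which may not hold for brute-force matching):

```python
# Back-of-the-envelope scaling check (assumption: runtime ~ pixel count,
# i.e. quadratic in the image width).
base_width, base_minutes = 512, 16       # the 512px run
target_width = 700

estimate = base_minutes * (target_width / base_width) ** 2
print(f"quadratic estimate: {estimate:.0f} min, observed: {7 * 60 + 20} min")
# quadratic estimate: 30 min, observed: 440 min
# The gap is far too large for pure pixel-count scaling, which is why I
# suspect something else (swapping, a CPU fallback) is going on.
```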
Looks like I'm having the same out-of-memory problems with the brute-force matcher. Until that gets improved you might need to use the patchmatch model for larger images. Using `--model=patchmatch --width=800`, each full-sized iteration takes ~920 seconds on my GTX 1080 with an i5 2500k CPU. Since the matching is done on the CPU, you'll probably get better results if you have a newer CPU.
There's also a fork that generates large images by splitting them into smaller, more manageable chunks. I haven't had the time to fully review and merge it yet, though. Let me know if you happen to give it a shot.