Hours to compute on a decent GPU, is everything working OK? #33
Comments
Hey, there was indeed a performance problem, which I think is now fixed. I had the same issue with my 1080. I looked into it, and brute-force patch matching wasn't running on the GPU (`T.nnet.conv2d` was compiling to a plain `ConvOp` node rather than a CUDA op). I added a hack to force GPU usage, and it now runs at the regular, slow-ish speed for me. Not sure when that regression started. If you try again, note that you'll need to upgrade to Keras v1.
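For intuition, here is what "brute-force patch matching" computes, as a minimal CPU-only NumPy sketch. This is an illustration of the idea, not the project's Theano/GPU implementation, and the function and variable names are made up: every patch of one feature map is scored against every patch of the other by normalized cross-correlation, and the best match wins.

```python
import numpy as np

def brute_force_patch_match(a, b, patch_size=3):
    """For each patch of feature map `a`, return the flat index of the
    most similar patch of `b` under normalized cross-correlation.
    Single-channel 2-D inputs only: a toy version of the idea."""
    def patches(x):
        h, w = x.shape
        p = patch_size
        return np.array([x[i:i + p, j:j + p].ravel()
                         for i in range(h - p + 1)
                         for j in range(w - p + 1)], dtype=np.float64)

    pa, pb = patches(a), patches(b)
    # L2-normalize every patch so a dot product equals cosine similarity.
    pa /= np.linalg.norm(pa, axis=1, keepdims=True) + 1e-8
    pb /= np.linalg.norm(pb, axis=1, keepdims=True) + 1e-8
    sim = pa @ pb.T            # all-pairs scores: the "brute force" part
    return sim.argmax(axis=1)  # best match in `b` for each patch of `a`
```

Matching an image against itself should map every patch to itself, which is a quick sanity check that the scoring is right.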
I'll try in a few hours, thanks!
I ran `sudo pip install neural-image-analogies --upgrade` and got:

```
Requirement already up-to-date: neural-image-analogies in /usr/local/lib/python3.4/dist-packages
Requirement already up-to-date: h5py>=2.5.0 in /usr/local/lib/python3.4/dist-packages (from neural-image-analogies)
Requirement already up-to-date: six>=1.10.0 in /usr/local/lib/python3.4/dist-packages (from neural-image-analogies)
Requirement already up-to-date: Keras>=1.0.0 in /usr/local/lib/python3.4/dist-packages (from neural-image-analogies)
Requirement already up-to-date: Pillow>=3.1.1 in /usr/local/lib/python3.4/dist-packages (from neural-image-analogies)
Requirement already up-to-date: scipy>=0.17.0 in /usr/local/lib/python3.4/dist-packages (from neural-image-analogies)
Requirement already up-to-date: PyYAML>=3.11 in /usr/local/lib/python3.4/dist-packages (from neural-image-analogies)
Requirement already up-to-date: scikit-learn>=0.17.0 in /usr/local/lib/python3.4/dist-packages (from neural-image-analogies)
Requirement already up-to-date: numpy>=1.10.4 in /usr/local/lib/python3.4/dist-packages (from neural-image-analogies)
```

It still seems to be around the same speed as before: two hours in for an 800px image and still not done. Do you have any benchmarks for the 1080?
Can you confirm it takes around 10-12 minutes to generate a 512x512 image with
I'll do more testing tonight; the previous 700px run crashed my system (as if it ran out of CPU RAM, which is a bit strange). I noticed that the starting resolution of the 3 input images changes the GPU memory usage (not 100% sure, though); maybe it also affects the computation time? This is the command I used for the 7-hour picture:

```
make_image_analogy.py rsz_1132.jpg rsz_1132_p.jpg calor.png out/blue --patch-size=3 --mrf-w 1.5 --model=brute --width 700
```
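A back-of-envelope estimate suggests why the starting resolution matters so much for the brute-force matcher: the all-pairs score matrix grows with the square of the patch count, i.e. roughly the fourth power of the image side. The downsampling factor and dtype below are assumptions for illustration, not measured from the project.

```python
def brute_score_matrix_bytes(feat_h, feat_w, dtype_bytes=4):
    """Rough size of the all-pairs patch-similarity matrix: one float32
    score per (patch in A, patch in B) pair, assuming roughly one patch
    per feature-map pixel. An estimate for intuition only."""
    n = feat_h * feat_w
    return n * n * dtype_bytes

# Assuming (hypothetically) features at 1/4 of the input resolution:
# doubling the image side multiplies the score matrix size by 16.
for side in (256, 512, 700):
    f = side // 4
    print(f"{side}px image -> {f}x{f} features -> "
          f"{brute_score_matrix_bytes(f, f) / 1e9:.2f} GB per score matrix")
```

With these assumed numbers, the 700px case needs a few GB for a single score matrix, which would be consistent with the memory pressure reported above.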
Looks like I'm having the same out-of-memory problems with the brute-force matcher. Until that gets improved, you might need to use the patchmatch model for larger images. There's also a fork that generates large images by splitting them into smaller, manageable chunks. I haven't had time to fully review and merge it, though. Let me know if you happen to give it a shot.
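The fork's actual chunking code isn't shown here, but the general idea can be sketched like this (a generic illustration with made-up names, not the fork's implementation): split the image into overlapping tiles, process each tile independently, and average the overlapping regions when stitching the result back together.

```python
import numpy as np

def _starts(size, tile, step):
    """Tile start offsets covering [0, size), with a final tile pushed
    flush against the edge so every pixel is covered."""
    s = list(range(0, max(size - tile, 0) + 1, step))
    if size > tile and s[-1] != size - tile:
        s.append(size - tile)
    return s

def process_in_tiles(img, fn, tile=256, overlap=32):
    """Run `fn` over overlapping tiles of a 2-D image and average the
    results wherever tiles overlap."""
    h, w = img.shape
    out = np.zeros((h, w))
    hits = np.zeros((h, w))
    step = tile - overlap
    for y in _starts(h, tile, step):
        for x in _starts(w, tile, step):
            out[y:y + tile, x:x + tile] += fn(img[y:y + tile, x:x + tile])
            hits[y:y + tile, x:x + tile] += 1
    return out / hits  # every pixel is covered by at least one tile
```

The catch, of course, is that a patch matcher run per-tile can only match within each tile, so seams and overlap size would need care in a real implementation.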
I have CUDA on a 1070 with cuDNN.
I used a patch size of 3 and --model=brute.
A 700px square took 7 hours (and 20 minutes).
I'm 100% sure the GPU was being used.
I noticed that the "static feature computation" took a very long time and was likely done on the CPU (judging by the GPU's memory usage). Iteration 2x0 also took very long; the others were a lot faster.
Is this how it's supposed to be? The result is stunning, so I'm OK with that... just want to be sure.
For reference: