
Help needed to get video transfer working..

Open glennamarshall opened this issue 6 years ago • 16 comments

Hi. Being a noob I'm just about coping with getting this stuff working - I have Windows 10 and managed to get single-image transfer working with 'python neural_style.py'. Using bash / Ubuntu doesn't activate my GPU.
I really want to get some animation working - but all sorts of errors and problems are stopping me, both using Python and bash. I'm asking in advance for help before asking specific questions and detailing problems, as a lot of my issues have already been raised here but without response. Thanks.

glennamarshall avatar Aug 02 '18 21:08 glennamarshall

Send me an email and I can try to help. cysmith1010 at gmail.

cysmith avatar Aug 03 '18 09:08 cysmith

Hi! Trying to tackle this issue too, on Win10. If it's not too much trouble, may I contact you with some questions? Thanks in advance. I noticed that deepflow2 only supports Linux, which seems to make video transfer on Windows impossible.

lcb931023 avatar Sep 09 '18 15:09 lcb931023

if you use the colab version you might get it to work.. https://colab.research.google.com/drive/18w-b-ntmA8Y5u7JYLrwPqFAIefj4ZFH5

glennamarshall avatar Sep 09 '18 16:09 glennamarshall

Thanks. The colab works great for images but has an inactivity limitation. Probably to prevent bitcoin mining, but also prevents us from using it for long render tasks unfortunately XD

lcb931023 avatar Sep 09 '18 16:09 lcb931023

i've been using it to batch render animation frames with success. Make sure you keep the browser open - you can get up to 12 hours of non-stop connection each day.

glennamarshall avatar Sep 09 '18 16:09 glennamarshall

That's impressive! Will give it a try then.

lcb931023 avatar Sep 09 '18 16:09 lcb931023

good luck - i'm about to try it myself .. I know that Cameron is currently working on a better optical flow as well.

glennamarshall avatar Sep 09 '18 16:09 glennamarshall

🍻 Speaking of which, I wonder how the results would look with a low-fidelity optical flow. If there's some kind of app or library that produces optical flow quickly, that runs on Windows, then it'll be real easy for people to try this out. The results may not look as good, but might have potential for artistic expression...

lcb931023 avatar Sep 09 '18 16:09 lcb931023

i know what you mean - i did this video without optical flow altogether - but it still has 'artistic expression' I think. https://vimeo.com/288168216

glennamarshall avatar Sep 09 '18 17:09 glennamarshall

Great music and choice of input videos!

lcb931023 avatar Sep 09 '18 17:09 lcb931023

thanks!

glennamarshall avatar Sep 09 '18 17:09 glennamarshall

I had some new findings on how to get video to work, so wanna share them here:

The forward/backward flows and weight files (.flo and .txt) depend on C++ binaries that only run on Linux. While the flow files can be generated using Windows OpenCV, I couldn't find a Windows replacement for the consistency checker. No bueno for Win10.

That being said, the scripts for generating flows and weights can be run separately from the style transfer itself, and they don't require much setup at all - no need for a GPU. We could generate them on Google Colab, or on our own Linux machines, then take those files back to Windows and run the transfer there.

The neural_style.py file may also need some adjustment depending on what you're trying to do. Here's my version: https://gist.github.com/lcb931023/bf8b7a7988e17dab7888077c55b997dc

lcb931023 avatar Sep 13 '18 09:09 lcb931023

Nice one..

I have a render farm set up to do all the optical flow and styling in Colab. I use multiple Google accounts and Google Cloud Storage. I can offer more details on the setup if anyone wants.

glennamarshall avatar Sep 13 '18 16:09 glennamarshall

That's so clever! How did you achieve it?

lcb931023 avatar Sep 15 '18 06:09 lcb931023

For the single-image setup - I have the call to neural_style.py inside a loop, passing in the frame number.

I have 5 accounts logged in (5 is the maximum allowed at the same time).

Before each frame is rendered - I write a text file like "FrameRendering_0001.txt" to my Google Cloud Storage bucket. This lets the other renderers know that this frame is being handled - so they skip it and find the next available frame to render.

For optical flow calculation and proper video rendering - it's not as efficient, as you always need the previous warped frame - so I can't farm-render a single sequence, but I can still do multiple independent sequences at the same time, obviously.

Here's something I just did using all of the above (optical flow & video). https://vimeo.com/290079999

glennamarshall avatar Sep 17 '18 09:09 glennamarshall

Thanks for sharing that. That's a great render.

How long did the 2-minute video take on the cloud?

lcb931023 avatar Sep 17 '18 14:09 lcb931023