
Is it possible to create a standalone program for that?

Open • Veshurik opened this issue 5 years ago • 13 comments

I'm just curious why no one has created a dedicated program for this yet, one that doesn't need any server connection. Just drag in the art and get the upscaled version back. Or is it really that difficult somehow?

Veshurik • Mar 06 '19 08:03

I don't know what you mean by "that", but waifu2x does have some software like this: https://github.com/lltcggie/waifu2x-caffe/releases

SekiBetu • Mar 06 '19 08:03

It doesn't work for me because I have an AMD GPU.

And I mean a program with a graphical interface.

Veshurik • Mar 06 '19 08:03

How about this one? It says "This build should support both CUDA for nVidia and OpenCL for AMD, nVidia & Intel HD": https://github.com/DeadSix27/waifu2x-converter-cpp/releases

SekiBetu • Mar 06 '19 08:03

Note that waifu2x-converter-cpp supports only the old models. Most deep learning frameworks only support CUDA (NVIDIA GPUs), which is one reason why I provide the web interface.

nagadomi • Mar 06 '19 14:03

> How about this one? It says "This build should support both CUDA for nVidia and OpenCL for AMD, nVidia & Intel HD": https://github.com/DeadSix27/waifu2x-converter-cpp/releases

Yeah, thanks, it works! Does it use the same system as the site bigjpg.com? Because I think the images come out better through bigjpg.com...

But of course, it takes a lot of time to process many images... I think you'd spend about the same amount of time processing them on the web...

And by the way, if you drag and drop many files (I dropped 121, for example), the last image isn't processed and a black screen appears.

Veshurik • Mar 09 '19 16:03

> And by the way, if you drag and drop many files (I dropped 121, for example), the last image isn't processed and a black screen appears.

You can open an issue there, because it's not made by nagadomi: https://github.com/DeadSix27/waifu2x-converter-cpp/issues

SekiBetu • Mar 10 '19 02:03

> Note that waifu2x-converter-cpp supports only the old models. Most deep learning frameworks only support CUDA (NVIDIA GPUs), which is one reason why I provide the web interface.

I'm curious about the GPU you're using for http://waifu2x.udp.jp/. An RTX 2080? 💃

unit2x • Mar 24 '19 14:03

@unit2x waifu2x.udp.jp is hosted on EC2 GPU instances (Tesla M60).

nagadomi • Mar 24 '19 16:03

You can try waifu2x-ncnn-vulkan; it works on almost any GPU: https://github.com/nihui/waifu2x-ncnn-vulkan
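
For batch processing (e.g. the 121-file case mentioned above), a minimal wrapper sketch that calls the executable once per image could look like this. The -i/-o/-n/-s flags follow the waifu2x-ncnn-vulkan README; the folder names and the assumption that the binary is on PATH are placeholders:

```python
# Minimal sketch: upscale every PNG/JPG in a folder by calling the
# waifu2x-ncnn-vulkan executable once per image. Assumes the binary is on
# PATH and that -i/-o/-n/-s behave as described in the project's README
# (noise level 1, 2x scale); folder names are placeholders.
import subprocess
from pathlib import Path

SRC = Path("input")    # folder with the original images
DST = Path("output")   # folder for the upscaled results
DST.mkdir(exist_ok=True)

images = sorted(list(SRC.glob("*.png")) + list(SRC.glob("*.jpg")))
for img in images:
    out = DST / img.name
    # One process per image, so a single bad file does not abort the whole batch.
    result = subprocess.run(
        ["waifu2x-ncnn-vulkan", "-i", str(img), "-o", str(out), "-n", "1", "-s", "2"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(f"failed: {img.name}: {result.stderr.strip()}")
```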

nihui • Apr 14 '19 12:04

I would like to ask whether a GPU is really needed if one just wants to convert images using the existing in-repo models (i.e. without training), and in that case, what dependencies are required at runtime?

brlin-tw • Apr 22 '19 05:04

The code in this repo does not support CPU processing; it would be unbelievably slow anyway. waifu2x-caffe and other third-party implementations do support CPU processing.

nagadomi • Apr 22 '19 06:04

@nagadomi Yes, CPU processing is much slower, but it's easy to deploy and, for infrequent access, much cheaper to host. For example, with Google Cloud Run, converting a 2560x1600 image using 1 vCPU takes 250-400 seconds and has an on-demand cost of 0.05 cents (0.016 cents for CPU/RAM and the remaining 0.034 cents for data transfer): https://cloud.google.com/products/calculator/#id=50da2413-8b35-4c8e-8d3d-72bf71216e07

If you are running the API on an AWS G3 instance ($0.75/h), you have to process at least ~5000 conversions/hour to reach the same CPU/RAM cost per conversion; a quick check is sketched below.
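
For reference, the break-even figure can be checked in a couple of lines (a rough sketch; the per-conversion and per-hour costs are just the assumptions quoted above):

```python
# Rough break-even check using the figures quoted above (assumed, not measured).
cloud_run_cpu_ram_cents = 0.016  # Cloud Run CPU/RAM cost per conversion, in cents
g3_cents_per_hour = 75.0         # AWS G3 on-demand price ($0.75/h), in cents

# Conversions per hour at which the G3 instance matches Cloud Run's
# per-conversion CPU/RAM cost:
break_even = g3_cents_per_hour / cloud_run_cpu_ram_cents
print(f"break-even: {break_even:.0f} conversions/hour")  # ~4700, i.e. roughly 5000
```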

Of course, user experience matters a lot, and fast conversions are essential.

P.S. If you want to deploy a CPU version of the API, you can use the deploy button here: https://github.com/gladkikhartem/waifurun

gladkikhartem • Aug 24 '19 13:08

The number of requests to waifu2x.udp.jp in the past 3 days is as follows (counted via reCAPTCHA requests):

2019/08/24: 54890
2019/08/23: 59016
2019/08/22: 62710

That's about 2500 req/hour, so the server has to process requests in an average of about 1.5 seconds. Also, half of the current cost is data transfer, and I don't think that can be reduced.
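
Those averages follow directly from the daily counts above; a quick sketch of the arithmetic:

```python
# Reproduce the averages from the daily request counts listed above.
daily_requests = [54890, 59016, 62710]  # 2019/08/24, 08/23, 08/22

requests_per_hour = sum(daily_requests) / len(daily_requests) / 24
seconds_per_request = 3600 / requests_per_hour  # average time budget per request

print(f"{requests_per_hour:.0f} req/hour")         # ~2450, i.e. about 2500
print(f"{seconds_per_request:.2f} s per request")  # ~1.47, i.e. about 1.5 seconds
```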

nagadomi • Aug 25 '19 03:08