nagadomi

The above test image is bilevel color with an alpha channel. I hadn't thought about this kind of format.

Here are the results of an internal benchmark. TL;DR: For photos, `models/photo` is best. For artwork, `models/anime_style_art_rgb` is best. EDIT: For anime (video), `models/photo` is best, I think. ## dataset ```...

I am going to release a new `models/anime_style_art` (Y only) model. It might be able to beat the RGB model at 2x.
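A "Y only" model operates on the luminance channel of a YUV/YCbCr representation instead of all three RGB channels. As a minimal sketch (the exact colorspace constants the model uses are an assumption here; these are BT.601 luma weights):

```python
import numpy as np

def rgb_to_y(rgb):
    # Extract the luma (Y) channel from an (..., 3) RGB array using
    # BT.601 weights. A Y-only model would upscale only this channel;
    # chroma is typically upscaled with a conventional interpolator
    # (an assumption about the surrounding pipeline, not shown here).
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# White stays at full intensity; black stays at zero.
y_white = rgb_to_y(np.full((1, 1, 3), 255.0))
y_black = rgb_to_y(np.zeros((1, 1, 3)))
```

Since most fine detail in images lives in luminance, a Y-only model can match an RGB model's perceived sharpness at a third of the channel count.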

Yes. `anime_style_art_rgb` is specialized for digital illustration. Anime is produced by video recording. `anime_style_art_rgb` sometimes produces weird contour lines for anime (including video frames and screenshots). source: ![anime](https://cloud.githubusercontent.com/assets/287255/13896863/0d359fa4-ede0-11e5-8129-594385389c31.png) anime_style_art_rgb: ![o1](https://cloud.githubusercontent.com/assets/287255/13896864/1b2374a6-ede0-11e5-84e6-14c5042f0da9.png) photo:...

vgg_7 and upconv_7 have different network architectures. vgg_7: 1. 2x the input image with a nearest-neighbor upscaler. 2. Repair the image with a CNN. upconv_7: 1. End-to-end 2x with a CNN (it...
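The difference between the two pipelines can be sketched in a few lines of NumPy. The nearest-neighbor step is the only concrete operation taken from the comment; `cnn_repair` and `deconv_2x` below are hypothetical placeholder names for the learned parts:

```python
import numpy as np

def nearest_2x(img):
    # Step 1 of the vgg_7-style pipeline: 2x nearest-neighbor upscale,
    # i.e. every pixel is repeated along height and width.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

# vgg_7-style:    hr = cnn_repair(nearest_2x(lr))  # CNN runs at FULL resolution
# upconv_7-style: hr = deconv_2x(cnn(lr))          # CNN runs at LOW resolution;
#                                                  # the last layer does the 2x

lr = np.arange(4.0).reshape(2, 2)  # toy 2x2 "image"
hr = nearest_2x(lr)                # 4x4 result
```

Running the convolutions at low resolution (the upconv_7 approach) touches a quarter of the pixels per layer, which is the main reason it is faster for the same output size.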

The purpose is the same, but the algorithm is different. I tried subpixel convolution before the ESPCN paper was published ([github log](https://github.com/nagadomi/waifu2x/blob/v0.12/lib/DepthExpand2x.lua)), but eventually I chose [deconvolution](https://github.com/torch/nn/blob/master/doc/convolution.md#spatialfullconvolution).
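The two differ in where the upscaling happens: subpixel convolution has a regular convolution emit r² times as many channels, then a fixed "pixel shuffle" rearrangement turns each group of r² channels into an r×r output block. A minimal NumPy sketch of just the rearrangement step (channel-last layout is an assumption):

```python
import numpy as np

def pixel_shuffle(x, r):
    # Rearrange an (H, W, C*r*r) feature map into an (H*r, W*r, C) image:
    # each pixel's r*r channel group fills an r x r block of the output.
    h, w, crr = x.shape
    c = crr // (r * r)
    x = x.reshape(h, w, r, r, c)
    x = x.transpose(0, 2, 1, 3, 4)  # -> (h, r, w, r, c)
    return x.reshape(h * r, w * r, c)

# A 2x2 map with 4 channels per pixel becomes a 4x4 single-channel image.
out = pixel_shuffle(np.arange(16.0).reshape(2, 2, 4), 2)
```

Deconvolution (`SpatialFullConvolution` in Torch, "transposed convolution" elsewhere) instead learns the upsampling kernel directly; both approaches produce the same output resolution.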

A PyTorch version is available at https://github.com/nagadomi/nunif (only the convert command is supported). However, CUDA is required. https://github.com/nihui/waifu2x-ncnn-vulkan supports Intel/AMD/NVIDIA.

https://github.com/K4YT3X/video2x https://github.com/HomeOfVapourSynthEvolution/VapourSynth-Waifu2x-caffe I have never used these.

Torch's luarocks command uses Torch's own rocks repository first. Also, cudnn from the master branch will not work.

In each epoch, the image specified by `-test` is converted and saved to a file like `scale2.0_best.34-4.png`, so it can be used for visual testing after each epoch. Typically...