
Script to scale is missing?

Open seadra opened this issue 5 years ago • 5 comments

Is this repo for training only? (train.py)

It doesn't look like there is a script which would do the actual image scaling.

seadra avatar Mar 21 '19 19:03 seadra

My code may seem messy >.<

For the most recent model training, I use sqlite to store pre-processed images. The scaling step is defined at https://github.com/yu45020/Waifu2x/blob/3b8713d571a9a704207f8833c07a5533c0c0279b/Dataloader.py#L87

    def get_img_patches(self, img_file):
        img_pil = Image.open(img_file).convert("RGB")
        img_patch = self.random_cropper(img_pil)
        lr_hr_patches = self.img_augmenter.process(img_patch)
        return lr_hr_patches
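For context, the LR/HR training pair is typically produced by downscaling the cropped HR patch; here is a minimal sketch of that step (the helper name and the bicubic downscale are assumptions for illustration, not the repo's actual augmenter):

```python
from PIL import Image

def make_lr_hr_pair(hr_patch, scale=2):
    """Downscale an HR patch to create its LR counterpart (illustrative only)."""
    w, h = hr_patch.size
    lr_patch = hr_patch.resize((w // scale, h // scale), Image.BICUBIC)
    return lr_patch, hr_patch
```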

yu45020 avatar Mar 21 '19 22:03 yu45020

Oh, no, what I meant is: for end users and distro packagers, it would be really nice if you could provide the trained models plus a waifu2x-like command that accepts an input image filename and an output filename (and maybe additional options such as the model and scaling factor).

Then we could just clone your repo, and run something like

pywaifu2x -i input.png -o output.png -s 2 -m dcscn
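A minimal sketch of what the argument parsing for such a wrapper could look like (the `pywaifu2x` name and all option names are hypothetical, taken only from the example invocation above):

```python
import argparse

def parse_args(argv=None):
    # Hypothetical CLI mirroring the example invocation; not part of the repo.
    parser = argparse.ArgumentParser(prog="pywaifu2x")
    parser.add_argument("-i", "--input", required=True, help="input image file")
    parser.add_argument("-o", "--output", required=True, help="output image file")
    parser.add_argument("-s", "--scale", type=int, default=2, help="upscale factor")
    parser.add_argument("-m", "--model", default="dcscn", help="model name")
    return parser.parse_args(argv)
```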

seadra avatar Mar 21 '19 22:03 seadra

That seems like a nice idea, and it will be a good chance for me to learn something new in programming.

By the way, this repo was created as a practice project to learn PyTorch, and I didn't expect to create something that comes close to the original waifu2x repo.

yu45020 avatar Mar 21 '19 23:03 yu45020

I'll be looking forward to it!

One added benefit is that, as far as I know, there is currently no way of using these new models under Linux or macOS without an NVidia GPU.

(waifu2x-caffe does work on CPU, but it's Windows-only; waifu2x-converter-cpp works on CPU but doesn't have the new models.)

seadra avatar Mar 22 '19 00:03 seadra

The new model is trained in fp16, but PyTorch doesn't support fp16 on CPU. You can still use it on CPU for inference by changing this line in Nvidia's apex here

class tofp16(nn.Module):
    """
    Utility module that implements::
        def forward(self, input):
            return input.half()
    """

    def __init__(self):
        super(tofp16, self).__init__()

    def forward(self, input):
        return input.half()

to

    def forward(self, input):
        return input.float()

I once did some comparisons on various out-of-sample tests; using fp16 checkpoints for fp32 inference makes almost no statistical difference. I will add more details to the ReadMe page along with the new checkpoints this weekend.
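As an illustration of the fp16-checkpoint-to-fp32-inference path, here is a hedged sketch (the loader function is hypothetical; the key step is that `load_state_dict` casts the fp16 weights into the fp32 module, and `.float()` guarantees all parameters are fp32 before CPU inference):

```python
import torch
import torch.nn as nn

def load_for_cpu_inference(model: nn.Module, checkpoint_path: str) -> nn.Module:
    # Load weights that were saved in fp16 and run the model in fp32 on CPU.
    state = torch.load(checkpoint_path, map_location="cpu")
    model.load_state_dict(state)   # copies (and casts) weights into the module
    model = model.float()          # ensure all parameters are fp32 for CPU
    model.eval()
    return model
```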

yu45020 avatar Mar 22 '19 00:03 yu45020