
How to load custom onnx fp16 models (for example Real DRCT GAN)? That would be a great improvement!

Open zelenooki87 opened this issue 1 year ago • 14 comments

https://mega.nz/file/0gJwyIBA#fTdbXWb6zbWrQApg2VgNRbY_fh3wdy5f-mP4Oz1jVbU

Please add support for this model in the Super Resolution task, because it is SOTA.

zelenooki87 avatar Nov 10 '24 11:11 zelenooki87

Hi my friend,

the onnx file alone is not enough to implement the model.

I would need the GitHub project to better understand how to implement it.

Djdefrag avatar Nov 10 '24 12:11 Djdefrag

With this script it works fine: https://pastebin.com/DUAKpuF1 However, if I rename the model to your naming scheme (for example "BSRGAN fp16") and change the parameters to float16 mode, your output tiling logic does not work properly with this model. The output is slightly blurred.
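
For reference, a minimal sketch of the kind of standalone whole-image inference such a script can do (this is not the pastebin script itself; file names and the provider list are placeholders):

import numpy as np
import onnxruntime as ort
from PIL import Image

# Create a session; DirectML is tried first, CPU is the fallback.
session = ort.InferenceSession("DRCT_fp16.onnx", providers=["DmlExecutionProvider", "CPUExecutionProvider"])

# Load the image and convert HWC uint8 -> NCHW float in [0, 1].
img = np.asarray(Image.open("input.png").convert("RGB"), dtype=np.float32) / 255.0
tensor = img.transpose(2, 0, 1)[None, ...]

# If the model was exported with fp16 inputs (no keep_io_types), cast the tensor too.
model_input = session.get_inputs()[0]
if model_input.type == "tensor(float16)":
    tensor = tensor.astype(np.float16)

output = session.run(None, {model_input.name: tensor})[0]

# NCHW float -> HWC uint8 and save.
out = np.clip(output[0].transpose(1, 2, 0), 0.0, 1.0) * 255.0
Image.fromarray(out.astype(np.uint8)).save("output.png")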

zelenooki87 avatar Nov 10 '24 12:11 zelenooki87

The output is blurred?

Djdefrag avatar Nov 10 '24 13:11 Djdefrag

Not blurry, but the output is not as detailed as it should be. [images: 1. QualityScaler-DRCT, 2. DRCT-myscript]

zelenooki87 avatar Nov 10 '24 13:11 zelenooki87

Could you add support for Real DRCT?

zelenooki87 avatar Nov 12 '24 10:11 zelenooki87

Hi my friend,

I was trying to replicate the project to convert it to onnx.

Where did you find the onnx file you posted? I can't find it in the project's GitHub.

Djdefrag avatar Nov 12 '24 13:11 Djdefrag

The author removed the finetuned model from Google Drive; I opened an issue about it here: https://github.com/ming053l/DRCT/issues/28 I had converted it to fp16 onnx because it performed much faster than the pth in chaiNNer. As I said, the author later removed the finetuned model. It would be really nice to add support for it.
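
For anyone who wants to reproduce the conversion without chaiNNer, a rough torch.onnx.export sketch; the import path, constructor arguments and checkpoint keys below are guesses, check drct/archs/DRCT_arch.py and the release config in the DRCT repo for the real values:

import torch
from drct.archs.DRCT_arch import DRCT  # hypothetical import path, verify against the repo

# The real constructor needs the exact settings used to train the checkpoint.
model = DRCT(upscale=4)
state = torch.load("RealDRCT.pth", map_location="cpu")
model.load_state_dict(state.get("params_ema", state), strict=True)
model.eval()

# Export with dynamic spatial dimensions so any input size is accepted.
dummy = torch.randn(1, 3, 64, 64)
torch.onnx.export(
    model, dummy, "RealDRCT_fp32.onnx",
    input_names=["input"], output_names=["output"],
    dynamic_axes={"input": {0: "batch", 2: "height", 3: "width"},
                  "output": {0: "batch", 2: "height", 3: "width"}},
    opset_version=17,
)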

zelenooki87 avatar Nov 12 '24 20:11 zelenooki87

Hi @Djdefrag, any news about DRCT support? Thanks

zelenooki87 avatar Nov 22 '24 10:11 zelenooki87

Hi my friend,

I tried to replicate the DRCT torch model and convert it to onnx, but without success.

In any case, if you already have the onnx model, you can make it compatible with QualityScaler.

Essentially, if you have the onnx model in fp32 mode you are already well on your way, but one additional step is needed, because QualityScaler is designed to use onnx fp16 models with fp32 input.

To do this you can use the following code:

import onnx
from onnxconverter_common import float16

model_fp32_path = f"{selected_AI_model}_fp32.onnx"
model_fp16_path = f"{selected_AI_model}_fp16.onnx"

# Convert the weights to fp16 but keep fp32 inputs/outputs (keep_io_types=True)
loaded_model_fp32 = onnx.load(model_fp32_path)
model_fp16 = float16.convert_float_to_float16(model=loaded_model_fp32, keep_io_types=True, max_finite_val=1e5)
onnx.save(model_fp16, model_fp16_path)

where selected_AI_model = "-DRCT-something"
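
As an optional sanity check (just a sketch using the same onnx API as above), you can confirm that the converted file still exposes fp32 inputs and outputs while the weights are fp16:

import onnx

# model_fp16_path comes from the snippet above
model = onnx.load(model_fp16_path)
for tensor in list(model.graph.input) + list(model.graph.output):
    elem_type = tensor.type.tensor_type.elem_type
    # Expect FLOAT (fp32) here when keep_io_types=True was used
    print(tensor.name, onnx.TensorProto.DataType.Name(elem_type))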

Djdefrag avatar Nov 22 '24 11:11 Djdefrag

Hi, the author of DRCT released updated Real GAN models. Please take a look and integrate them into QualityScaler: https://github.com/ming053l/DRCT

zelenooki87 avatar Dec 05 '24 16:12 zelenooki87

Hi my friend,

I still can't find the .onnx model, did you find it? I can't find any links.

Djdefrag avatar Dec 05 '24 17:12 Djdefrag

Just convert it with chaiNNer: https://mega.nz/file/MkQgCL6T#R0A7rawCgEC1dAD8Qsb1kMZRZZeFj0oDUMZZOoHpN3U

zelenooki87 avatar Dec 05 '24 19:12 zelenooki87

?

You converted the .pth model with chaiNNer?

Djdefrag avatar Dec 06 '24 06:12 Djdefrag

Hi my friend,

I tested this AI and these are the results for a 256x256 image.

As you can see, RealDRCT is 10x slower than BSRGANx4 (the slowest model currently used).

It also uses 2x more GPU VRAM (BSRGAN 1.5 GB / RealDRCT 3 GB).

[image: benchmark results]

It also seems that DirectML is not fully compatible with this AI architecture.

[image]
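
One way to check that, assuming onnxruntime-directml is installed (the model file name is a placeholder): create the session with the DirectML provider first and print which providers were actually registered.

import onnxruntime as ort

session = ort.InferenceSession(
    "RealDRCT_fp16.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

# If DirectML cannot handle parts of the architecture, onnxruntime falls back to CPU
# for those nodes; this prints the providers the session actually ended up with.
print(session.get_providers())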

Djdefrag avatar Dec 06 '24 08:12 Djdefrag