QualityScaler
How to load custom onnx fp16 models (for example real drct gan)? That would be great improvement!
https://mega.nz/file/0gJwyIBA#fTdbXWb6zbWrQApg2VgNRbY_fh3wdy5f-mP4Oz1jVbU
Please add support this model for Super resolution task, cause it is SOTA.
Hi my friend,
only the onnx file is not enough to implement the model.
I would need the github project to better understand how to implement it
With this script it is working fine: https://pastebin.com/DUAKpuF1 However, if I rename the model to your naming convention (for example "BSRGAN_fp16") and change the parameters to float16 mode, your output tiling logic does not work properly with this model. The output is slightly blurred.
The output is blurred?
Not blurry, but the output is not as detailed as it should be.
Could you add support for Real DRCT?
Hi my friend,
I was trying to replicate the project to convert it to onnx.
Where did you find the onnx file you posted? because in the project github I can't find it
The author removed the finetuned model from Google Drive. I opened an issue here: https://github.com/ming053l/DRCT/issues/28 I converted it to fp16 onnx because it performed much faster than the pth in chaiNNer. The author later removed the finetuned model, as I said. It would be really nice to add support for it.
Hi, @Djdefrag Any news about support DRCT? Thanks
Hi my friend,
i tried to replicate the DRCT torch model and convert it to onnx, but without success.
In any case, if you already have the onnx model, you can make it compatible with QualityScaler yourself.
Essentially, if you have the onnx model in fp32 you are already well on your way. There is just one additional step, because QualityScaler is designed to use onnx fp16 models that keep fp32 input.
To do this you can use the following code:
import onnx
from onnxconverter_common import float16

model_fp32_path = f"{selected_AI_model}_fp32.onnx"
model_fp16_path = f"{selected_AI_model}_fp16.onnx"

# Load the fp32 model and convert the weights to fp16,
# keeping the graph inputs/outputs in fp32 (keep_io_types=True)
loaded_model_fp32 = onnx.load(model_fp32_path)
model_fp16 = float16.convert_float_to_float16(model = loaded_model_fp32, keep_io_types = True, max_finite_val = 1e5)
onnx.save(model_fp16, model_fp16_path)
where selected_AI_model = "-DRCT-something"
Hi, the author of DRCT released updated real GAN models, please take a look and integrate them in QualityScaler: https://github.com/ming053l/DRCT
Hi my friend,
I still can't find the .onnx model, did you find it? I can't find any links.
just convert it with chaiNNer: https://mega.nz/file/MkQgCL6T#R0A7rawCgEC1dAD8Qsb1kMZRZZeFj0oDUMZZOoHpN3U
?
you converted the .pth model with Chainner?
Hi my friend,
i tested this AI and these are the results for a 256x256 image.
As you can see, RealDRCT is 10x slower than BSRGANx4 (the slowest model currently in use).
It also uses 2x more GPU VRAM (BSRGAN 1.5GB / RealDRCT 3GB).
It also seems that DirectML is not fully compatible with this AI architecture.