Haoming
It seems to be training now, at least after I comment out [this line](https://github.com/ostris/ai-toolkit/blob/main/jobs/process/BaseSDTrainProcess.py#L1447)
I was training a Slider for SDXL, though I only did it once and didn't get good results. Maybe I did something wrong in the configs.
Can't you just set the **Upscale by** to `1.25` in the **Hires. fix** settings, then click the `Upscale` button 3 times? This way, you can also interrupt early if like...
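For what it's worth, a quick back-of-the-envelope check of the arithmetic behind "1.25× three times" (my own sketch, assuming each click actually re-upscales the previous result, which the replies below show is not how the button behaves):

```shell
# Three successive 1.25x upscales would multiply the resolution by 1.25^3.
awk 'BEGIN { printf "Total scale: %.4f\n", 1.25 ^ 3 }'
# Prints a total scale of about 1.9531, i.e. close to a single 2x upscale.
```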
Huh, so clicking the `Upscale` button again still only processes the original `txt2img` result? TIL
Oh yeah, I just tried it as well. Clicking `Upscale` again produces the same upscaled resolution, but the image becomes noisier. Seems like the button only takes the original...
> `scribble_color_fixed=True`, `scribble_alpha_fixed=True`, `scribble_softness_fixed=True`

IMHO, it is better to let the user change those parameters. Disabling them kind of defeats the point of using `ForgeCanvas`...

> now you never worry...
Forgot to mention, but full precision (`fp32`) also works correctly:

```bash
trtexec --onnx=4xNomos8kDAT.onnx --saveEngine=4xNomos8kDAT.trt --shapes=input:1x3x128x128 --inputIOFormats=fp32:chw --outputIOFormats=fp32:chw
```
Just tried another model: [2x-ModernSpanimationV1](https://openmodeldb.info/models/2x-ModernSpanimationV1), and adding the `--fp16` flag still works correctly. So it's probably a certain operator within the **DAT** architecture that is causing the `nan`?
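For reference, the half-precision build would presumably be the same `trtexec` invocation with `--fp16` added (the model filename and shape below are illustrative; I only verified the flag itself, not this exact command line):

```bash
trtexec --onnx=2x-ModernSpanimationV1.onnx --saveEngine=2x-ModernSpanimationV1.trt --shapes=input:1x3x128x128 --fp16 --inputIOFormats=fp32:chw --outputIOFormats=fp32:chw
```

Note that `--fp16` only enables half-precision kernels inside the engine; the I/O formats can stay `fp32:chw` as in the full-precision command.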
> inference code

https://github.com/Haoming02/TensorRT-Cpp

I've since tried other upscaler models, and they worked fine. So it's most likely that the `DAT` architecture does not like `fp16` precision...
Since the effect of this Extension isn't perfectly precise to begin with, doing this fancy math feels rather... redundant. Do you have any examples where these changes improve the outcome...