Unloading ML-Danbooru, is it possible without webui reload?

Open coder168 opened this issue 2 years ago • 12 comments

When I click the "Unload all interrogate models" button, the following error occurs:

...
  File "D:\app2\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\tagger\ui.py", line 25, in unload_interrogators
    if i.unload():
  File "D:\app2\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\tagger\interrogator.py", line 136, in unload
    gr.collect()
AttributeError: module 'gradio' has no attribute 'collect'

After a quick check, this error was caused by a code change (line 136 of \tagger\interrogator.py) in a recent pull request merge: "Manually merged: Support ML-Danbooru https://github.com/picobyte/stable-diffusion-webui-wd14-tagger/pull/6, changes amended from CCRcmcpe's".

Please review the relevant code and fix it; currently the tagger cannot clean up its loaded models. Thanks. BTW, my webui env versions: version: v1.4.1  •  python: 3.10.8  •  torch: 2.0.0+cu118  •  xformers: 0.0.20  •  gradio: 3.32.0

coder168 avatar Jul 18 '23 06:07 coder168

OK, will check. I never touch that button. Thanks for reporting.

picobyte avatar Jul 18 '23 08:07 picobyte

Removing the line will remove the attribute error, and I'll push that as a quick fix, but then the model won't unload. This exact line was one of the open questions I had for @CCRcmcpe, who provided the original pull request and might have a better solution for unloading. The collect() call seems to be part of the unloading that is commented as dysfunctional (see this comment), and properly unloading tensorflow models is not possible, but maybe the MLDanbooruInterrogator implementation does not have this flaw?

picobyte avatar Jul 18 '23 10:07 picobyte

Thanks for your review and quick fix.

So with this quick fix 5cbf4f5, the button works for unloading all loaded models except the ML-Danbooru models? Is my understanding correct?

coder168 avatar Jul 18 '23 12:07 coder168

Edit: actually it seems related to tensorflow only, so it affects the DeepDanbooruInterrogator model and the experimental large_batch_interrogate.

Any of the others should unload properly. Note that with my implementation results are re-read from db.json, even after a shutdown and reload of the stable-diffusion webui (including the UI); queries will then read from the database. Or is that not enough on Windows? It allows you to retrieve/combine former interrogation output without loading the model.
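
Purely as illustration (the actual db.json schema used by the extension may differ), retrieving cached results could look like:

import json
from pathlib import Path

# hypothetical layout: a mapping from image path to its stored tags
db = json.loads(Path('db.json').read_text())
cached_tags = db.get('some_image.png')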

Let's add a notice on the "unload all" button, or when unloading a danbooru model, exactly for this purpose: that unloading this model requires a reload of the webui (and reload-ui), and that the interrogation output is retrievable from the db without loading the model.

picobyte avatar Jul 18 '23 12:07 picobyte

On Stack Overflow there is a solution mentioned:

from numba import cuda
cuda.select_device(0)  # bind to GPU 0
cuda.close()           # tear down the CUDA context for this device

But the numba documentation is unclear on how to reopen the same device again. And reading this SO answer, someone mentions that a .close() is unrecoverable and suggests a .reset() instead; more numba docs, where a reset() is listed.

I was thinking about something like this:

# use_cpu is assumed to come from the extension's device settings
if use_cpu:
    import gc
else:
    from numba import cuda
...

class DeepDanbooruInterrogator(Interrogator):
    ...
    def unload(self) -> bool:
        unloaded = super().unload()

        if unloaded:
            if use_cpu:
                # drop the Keras graph state and force a garbage-collection
                # pass to free the host RAM the model was holding
                import tensorflow as tf
                tf.keras.backend.clear_session()
                gc.collect()
            else:
                # reset the CUDA context so its VRAM allocations are released
                # (numba's reset() should be recoverable, unlike close())
                device = cuda.get_current_device()
                device.reset()
        return unloaded

But I run on CPU.

picobyte avatar Jul 18 '23 17:07 picobyte

It is a little weird. I ran a quick test with the latest version, but VRAM usage does not seem to change; only a chunk of main memory gets released after unloading. I am using an Nvidia card.

coder168 avatar Jul 19 '23 04:07 coder168

Hi, you mentioned this: https://github.com/toriato/stable-diffusion-webui-wd14-tagger/issues/33, which refers to https://github.com/keras-team/keras/issues/2102

In my opinion, GPU memory cannot be released by tf.keras.backend.clear_session().

I don't think numba is a good idea, because it seems that numba requires Nvidia's CUDA dependencies.

If you really want to release tensorflow memory, I suggest using multiprocessing.Process to load the model and let the OS release it:


https://github.com/WSH032/image-deduplicate-cluster-webui/blob/da8b29cb997a33671477888c6d64c4f83bf0d41b/cluster_images.py#L70
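
A minimal sketch of that idea (the model path and preprocessing are hypothetical; TensorFlow is imported only inside the child process, so the OS reclaims all of its memory, GPU included, once the child exits):

import multiprocessing as mp

def interrogate_in_child(image_path, out):
    # TensorFlow lives only in this process; its RAM/VRAM allocations
    # disappear when the process exits
    import tensorflow as tf
    model = tf.keras.models.load_model('deepdanbooru-project')  # hypothetical path
    # ... preprocess image_path and run the model here ...
    out.put(['example_tag'])  # placeholder result

if __name__ == '__main__':
    ctx = mp.get_context('spawn')  # avoid fork()-ing an initialized CUDA context
    queue = ctx.Queue()
    proc = ctx.Process(target=interrogate_in_child, args=('image.png', queue))
    proc.start()
    tags = queue.get()
    proc.join()  # child exits; the OS frees everything it held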

WSH032 avatar Jul 19 '23 08:07 WSH032

Thanks, this is why I placed it behind an experimental option in the settings. The Nvidia dependency for numba did occur to me, but at least for Nvidia, numba could be an option? AMD (ROCm) and CPU are the others, or do even drivers like nouveau or nv function? I'm also not exactly sure which configs do not release properly, and whether it is VRAM or RAM; I am trying to get an impression.

Thanks for the links, I'll do some more digging.

picobyte avatar Jul 19 '23 09:07 picobyte

One thing to note is that some users do not install the Nvidia CUDA toolkit, but use Torch's bundled CUDA. But the Nvidia CUDA toolkit is required by numba. That's why I think numba is not a good idea.

In fact, any model related to tensorflow will encounter the problem of not being able to release GPU memory.

The reason why other models, such as WD14, can be released normally is that they are onnx models (check this), and onnxruntime can release them properly.
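
For example (a sketch; the model path is hypothetical): once the last reference to an InferenceSession is dropped, onnxruntime can free what it allocated.

import onnxruntime as ort

# creating the session allocates the model's memory (on the GPU with this provider)
sess = ort.InferenceSession('wd14.onnx', providers=['CUDAExecutionProvider'])
# ... sess.run(...) for interrogation ...
del sess  # dropping the session lets onnxruntime release those allocations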

gc.collect() is just an explicit Python garbage collection, which cannot release GPU memory.


toriato seems to have tried to release tensorflow memory as well, and I have tried it too, but this seems to be unsolvable. https://github.com/toriato/stable-diffusion-webui-wd14-tagger/issues/31


My solutions:

  1. Use the tensorflow model in a subprocess, as I mentioned above.
  2. Convert the tensorflow model to an onnx model.

I usually prefer the second option, because an onnx model is faster to start and run. The downside is that you can't use the latest models unless you convert new models promptly after the model author releases a new version.

WSH032 avatar Jul 19 '23 11:07 WSH032

Ah thanks again. I found tf2onnx, which seems to do exactly this.
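
Something like this might work (a sketch; assumes the tensorflow model loads as a Keras model, and the paths are hypothetical):

import tensorflow as tf
import tf2onnx

# load the existing TensorFlow/Keras model...
model = tf.keras.models.load_model('deepdanbooru-project')  # hypothetical path
# ...and write it out as ONNX, which onnxruntime can load and unload cleanly
tf2onnx.convert.from_keras(model, opset=13, output_path='deepdanbooru.onnx')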

picobyte avatar Jul 19 '23 12:07 picobyte

You are welcome. I have a Colab notebook about tf2onnx that might be helpful, if you need it. But the code is commented in Chinese, and it needs some adjustments:

https://github.com/WSH032/wd-v1-4-tagger-feature-extractor-tutorials/blob/main/wd14_tf2onnx.ipynb

WSH032 avatar Jul 19 '23 14:07 WSH032

One option I am considering is implementing a Settings -> Tagger -> checkbox: convert tensorflow models to onnx. There might be checks necessary for updates; maybe keep the tensorflow model next to the onnx model, and rerun the conversion if the tensorflow model's mtime or sha256 changes.
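
Roughly like this (a sketch; the file layout and names are hypothetical):

import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open('rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            h.update(chunk)
    return h.hexdigest()

def needs_reconversion(tf_model: Path, onnx_model: Path, digest_file: Path) -> bool:
    # reconvert when the onnx model is missing, or when the tensorflow
    # model's recorded digest no longer matches its current contents
    if not onnx_model.exists() or not digest_file.exists():
        return True
    return digest_file.read_text().strip() != sha256_of(tf_model)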

picobyte avatar Jul 22 '23 15:07 picobyte