Launch from bash script
Is there any way to call this extension from a bash script, without using the Stable Diffusion web UI?
It does require the webui to be running somewhere, but that can be a Docker container, a VirtualBox VM, or another server, e.g. on RunPod, rather than your own machine. Via the FastAPI it is possible to access this external server from the command line.
Currently, if you start up the webui, you can see how it is done at the API link (bottom of the page, on the left). Search for '/tagger/'; there are two POST routines and one GET routine. Let's say your server is running on 127.0.0.1, port 7860. In the API docs, do the interrogators route first: click Execute and it will give you the bash command:
curl -X 'GET' \
'http://127.0.0.1:7860/tagger/v1/interrogators' \
-H 'accept: application/json'
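If you have jq installed, you can pull the model names out of the response; a small sketch, assuming the list is returned under a models key (check the actual JSON for your version):

curl -s 'http://127.0.0.1:7860/tagger/v1/interrogators' \
  -H 'accept: application/json' | jq -r '.models[]'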
The image interrogation works after you've edited the JSON: set model (select a single one from those listed above) and image, which needs to contain a base64-encoded image (any base64 encoder will do), e.g.
curl -X 'POST' \
'http://127.0.0.1:7860/tagger/v1/interrogate' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"image": "...",
"model": "wd14-convnextv2.v1",
"threshold": 0.35
}'
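Since the threshold may not yet be applied server-side (see below), one option is to filter the returned tags client-side; e.g. append this to the curl command above (adding -s to silence curl's progress output), a sketch that assumes the response returns tag-to-confidence pairs under a caption key (check the actual response for your version):

  | jq -r '.caption | to_entries[] | select(.value >= 0.35) | .key'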
or substitute the base64 inline:
-d '{
  "image": "'"$(base64 -w0 /path/to/image.jpg)"'",
  "model": "wd14-convnextv2.v1"
}'
(the -w0 stops GNU base64 from wrapping its output, which would otherwise put literal newlines inside the JSON string). Though sometimes I get a /usr/bin/curl: Argument list too long; a workaround is sketched below. The last routine is to unload a model.
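The workaround: build the payload with the shell's built-in printf (no exec, so no argument-length limit) and let curl read it from a file with -d @file; a sketch, again assuming GNU base64:

printf '{"image": "%s", "model": "wd14-convnextv2.v1", "threshold": 0.35}' \
  "$(base64 -w0 /path/to/image.jpg)" > /tmp/payload.json

curl -X 'POST' \
  'http://127.0.0.1:7860/tagger/v1/interrogate' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d @/tmp/payload.json

As for unloading, if the route is exposed as /tagger/v1/unload-interrogators (check the exact path in your API docs), it is a plain POST with no body:

curl -X 'POST' \
  'http://127.0.0.1:7860/tagger/v1/unload-interrogators' \
  -H 'accept: application/json'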
The image interrogation also lists threshold, but I don't think it does anything yet on the server side. And a whole lot of what can be configured in the UI is still missing from the API. Server-side handling occurs in tagger/api.py; feel free to add some features and send me a PR.
Finally, there is a preload.py; I've never tried it out (I hope it still works). It seems to allow configuring the preloading of a particular model when the extension is loaded.
I'll certainly give that a try, thank you very much for the description.