Allow safety checker pipeline configuration that returns boolean array but does not black out images

theahura opened this issue 3 years ago • 30 comments

Is your feature request related to a problem? Please describe. See this PR comment: https://github.com/huggingface/diffusers/pull/815#discussion_r994418216

TLDR: with recent changes in #815, developers have the ability to disable the safety checker. Currently, the only options available to devs are to either have the safety checker or not have it at all. While this is useful, many applications of NSFW content require opt-in access from end users. For example, consider the Reddit NSFW model -- the end user is shown an 'nsfw' overlay that they have to manually click through. Currently, the diffusers library does not make it easy to support such a use case.

Describe the solution you'd like I think the best approach is to add a flag to the SafetyChecker class called black_out_images. This flag would then modify the if statement on this line: https://github.com/huggingface/diffusers/blob/797b290ed09a84091a4c23884b7c104f8e94b128/src/diffusers/pipelines/stable_diffusion/safety_checker.py#L74

        for idx, has_nsfw_concept in enumerate(has_nsfw_concepts):
            if has_nsfw_concept and black_out_images:
                images[idx] = np.zeros(images[idx].shape)  # black image

The flag would then be passed into the SafetyChecker from the top level pipeline config.
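
For illustration, the requested behaviour boils down to something like the following standalone helper (the black_out_images flag and this helper are hypothetical, not existing diffusers code):

    import numpy as np

    def check_images(images, has_nsfw_concepts, black_out_images=True):
        # Only black out flagged images when the proposed flag is set;
        # the boolean array is returned untouched either way.
        for idx, has_nsfw_concept in enumerate(has_nsfw_concepts):
            if has_nsfw_concept and black_out_images:
                images[idx] = np.zeros(images[idx].shape)  # black image
        return images, has_nsfw_concepts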

Describe alternatives you've considered Another alternative is to do this at the pipeline level. For example, we could pass in a flag to the Pipeline class called black_out_nsfw_images. This flag would then modify the safety_checker call here: https://github.com/huggingface/diffusers/blob/797b290ed09a84091a4c23884b7c104f8e94b128/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L335

        safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(self.device)
        cleaned_image, has_nsfw_concept = self.safety_checker(
            images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
        )

        if black_out_nsfw_images:
            image = cleaned_image

Additional context In both cases, I believe the config can default to 'nsfw images will be blacked out'. Having the option is critical, however.

theahura avatar Oct 14 '22 16:10 theahura

Hey @theahura,

Could you maybe try to add a community pipeline for this? See: https://github.com/huggingface/diffusers/issues/841

patrickvonplaten avatar Oct 14 '22 19:10 patrickvonplaten

Wouldn't it be far more beneficial to return Gaussian blurred images, like in the Discord beta? If the image is black, you don't know if this is something to refine in your prompt, or a prompt to scrap.

The way I have edited the safety checker is to simply return the image, regardless. Now it's up to the developer to make use of the has_nsfw_concepts and decide what to do with it. In my case, I have chosen the Gaussian blur, so the user can see if:

  • A) The flag was in error
  • B) The flag was prudish (maybe adjust the prompt)
  • C) That's clearly a pornographic/violent image and my prompt is clearly comprehended as such

It actually doesn't make much sense to return a has_nsfw_concepts type deal if you're just returning a black image.... clearly, they'll get it. Lol

WASasquatch avatar Oct 15 '22 03:10 WASasquatch

@patrickvonplaten I strongly think this should be a maintained pipeline. I think it is an incredibly common and practical use case, and many people will end up with the same question.

If the maintainers don't budge on this, I'll inevitably build it in the community pipeline and try and push it up to main, but obviously that would be on a different timeframe

theahura avatar Oct 15 '22 04:10 theahura

Hey @theahura,

We simply cannot maintain all the possible different use cases. Note that pipelines should be seen more as examples / the highest-level API. For cases such as yours, it would be very simple to just do the following:

# make sure you're logged in with `huggingface-cli login`
import torch
from diffusers import StableDiffusionPipeline

# 1. load full pipeline
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, revision="fp16")

safety_checker = pipe.safety_checker
feature_extractor = pipe.feature_extractor

# 2. Now disable the safety checker
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, revision="fp16", safety_checker=None)

pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]  

# 3. Now use `safety_checker` and `feature_extractor` in whatever way you like #170 
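
One way step 3 could look, as a rough sketch only (the tensor conversion and device/dtype handling here are assumptions, not an official recipe):

    import numpy as np
    import torch

    # convert the PIL image back to the NHWC float array the checker expects
    np_image = np.array(image).astype(np.float32)[None, ...] / 255.0

    # match the checker's device/dtype (it was extracted from the fp16 pipeline above)
    safety_checker = safety_checker.to("cuda")
    safety_input = feature_extractor([image], return_tensors="pt").to("cuda")

    checked_image, has_nsfw_concept = safety_checker(
        images=np_image, clip_input=safety_input.pixel_values.to(torch.float16)
    )

    if has_nsfw_concept[0]:
        # decide yourself what to do: warn the user, blur, or discard
        print("Potential NSFW content detected")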

Note that especially for topics regarding the safety checker, there are many different ideas/opinions out there and we cannot try to find a solution that works for everyone. In this case, I don't think we should change anything - it's very simple to adapt the safety checker to your needs as shown in the code snippet above.

patrickvonplaten avatar Oct 16 '22 23:10 patrickvonplaten

@patrickvonplaten

Note that especially for topics regarding the safety checker, there are many different ideas/opinions out there and we cannot try to find a solution that works for everyone. In this case, I don't think we should change anything - it's very simple to adapt the safety checker to your needs as shown in the code snippet above.

I think it should at least make sense... Right now, it returns a boolean for whether an image is NSFW, and then also returns a black image. So even from a highest-level API standpoint, the implementation just doesn't make sense. If the boolean were there for a developer to implement something (a message, blurring the image, etc.), you'd actually have something.

It's not usable in any API sense, high, low, or whatever. It just doesn't make sense. You even advertise the use of safety_checker.py in this repo's documentation, but it has no use beyond brute-force censoring. There is no API aspect to it.

Is this really more about liabilities, shoving it off to "community" pipelines?


Considering that essentially everyone who isn't running a service has just disabled this with a dummy function or None, it does seem to be less about what's right for the usage and more about something ulterior. If someone wants to censor, they can, and they should be able to use the safety checker as it seems inherently intended to be used: as a warning to the developer, to then handle as they please.


If you do keep it as is, please think about renaming it so it makes sense. Like "safety_censor". It ain't checking Jack (from a usage and implementation standpoint).

WASasquatch avatar Oct 17 '22 01:10 WASasquatch

@patrickvonplaten thanks for the reply. I think that there's a lot of sensitivity around this particular avenue of feature requests, so I understand the hesitance to take on more maintenance here.

That said, though I disagree with the tone of WASasquatch, I think overall they make a good point -- there aren't many DEVELOPER use cases that play well with having a forced black out. I think this feature falls pretty squarely in line with the overall goals of this library, namely, to safely and easily enable many people to utilize SD. By having this feature be managed, many more developers can safely include SD as a black box without having to know anything about features, extraction, or even that there are multiple models involved.

I wouldn't ask the maintainers to consider frivolous requests, and I agree that doing everything is out of scope. But I do think that adding this one additional feature would be hugely beneficial and solve the 99% use case. In other words, I don't think this is just 'my needs', so much as the needs of the broader community.

(NB I don't think having this be a separate pipeline is a great idea, because that DOES increase maintenance load significantly. Instead, the default SD pipeline parameter for the safety checker should just take in one of three options -- default, None, and 'NO_BLACKOUT' or something equivalent.)

theahura avatar Oct 17 '22 04:10 theahura

That said, though I disagree with the tone of WASasquatch, I think overall they make a good point -- there aren't many DEVELOPER use cases that play well with having a forced black out. I think this feature falls pretty squarely in line with the overall goals of this library, namely, to safely and easily enable many people to utilize SD. By having this feature be managed, many more developers can safely include SD as a black box without having to know anything about features, extraction, or even that there are multiple models involved.

I don't think this library is there to safely and easily enable many people to use SD. It's an API to allow developers to build services, which then let many people utilize SD. In no way is installing Python and dependencies and copying/pasting or customizing a script something "many people" do (or easy); that's inherently a minuscule group compared to the user bases of services like mine and others.

To that end, how is an arbitrary safety "checker" that doesn't do what its name describes, or even perform the functionality it's programmed to do correctly, beneficial to an API?

Having an option for black-out / heavy Gaussian blur was my initial idea (and I have the Gaussian blur implemented so a user can actually screen what's going on in a safe, non-explicit manner, as I described above).

And let's be honest, the safety checker is weird. It censors random stuff and you don't know why. Was it actually explicit (did it just throw in a random nude person or something? It happens)? You'd likely be able to tell through a non-descriptive Gaussian blur, and then make use of negative prompts.

WASasquatch avatar Oct 17 '22 06:10 WASasquatch

Ok, I think we're intertwining multiple feature requests here:

  • 1.) Allow an option to not black out an image, but return a boolean whether it's safe or not. As said above, personally I don't see the huge use case for this + it's weakening the safety of the model even more. cc'ing @patil-suraj @pcuenca @anton-l @natolambert here
  • 2.) By @WASasquatch, to blur the image instead of showing a black image, as another option.

From a practical point of view, both use cases could be enabled by adding a config parameter to the __init__ here: https://github.com/huggingface/diffusers/blob/2b7d4a5c218ed1baf12bca6f16baec8753c87557/src/diffusers/pipelines/stable_diffusion/safety_checker.py#L24 that is called "nsfw_filter=black_out/blurred/None"

I'm fine with a PR being opened for this, even though I won't have the time to do it myself. But first I want to check with @natolambert @meg-huggingface - what do you think?

patrickvonplaten avatar Oct 17 '22 09:10 patrickvonplaten

Also @WASasquatch, what do you mean by:

Is this really more about liabilities, shoving it off to "community" pipelines?

?

patrickvonplaten avatar Oct 17 '22 09:10 patrickvonplaten

  • 1.) Allow an option to not black out an image, but return a boolean whether it's safe or not. As said above, personally I don't see the huge use case for this + it's weakening the safety of the model even more. cc'ing @patil-suraj @pcuenca @anton-l @natolambert here

This just doesn't make sense. How is it weakening anything? It's just letting the class do what it's clearly intended to do, that was botched for some reason, which comes back to

Is this really more about liabilities, shoving it off to "community" pipelines?

It seems very obvious that the safety checker was botched into producing only black images regardless, which makes letting the developer using the API know it's NSFW via a boolean useless. They will know, the end user will know: it will just be a black, useless image!

It's acting like a hardcoded feature that could be slipped into the pipe itself, not a class to actually use for something. Such as: the boolean is True, this image is NSFW, and now it is up to me how to handle it. Do I refuse to even save the image to disk? Do I warn the user? Do I blur the image? The possibilities go on, as it is being used as an API method, where the developer can utilize the boolean response with the image and do as they please.

Assuming the developer using your API is stupid is really kinda offensive. This isn't for end users; this is inherently for developers to make something for end users. It would then inherently be our decision what to do with a flagged image.


On a side note, I feel too much nudity gets by the censor for it to really be protection (it seems dependent on prompt words and/or how well-formed the content is in the diffusion, which can often be incoherent out of scale with, say, 512x768), and it needs much more work. I feel the SD model pipelines should have graphic content warnings as is for their usage.

Additionally, with negative prompts you can now spoof prompt interpretation with stuff like negatively prompting clothes.

WASasquatch avatar Oct 17 '22 16:10 WASasquatch

@WASasquatch, ok, I get your point about "Assuming the developer using your API is stupid" and see how our messaging can come across as such at the moment. Sorry, that's not what we're trying to communicate. It's rather the following assumption where I think it's hard to argue against:

  • If 10,000 people use this API, 9,000 will just use the default settings and never read the docs. This is always the case in OSS, and APIs should be designed in a way that people don't have to read the docs. Now, we don't want a default setting where nsfw images are returned for 90% of users -> so it makes total sense to me to always have a default that turns the nsfw filter on
  • Now for the 10% of people that read docs and tweak the code, I usually agree with you 100% that we should aim for an API that gives maximum freedom to the user, within limits defined by our philosophy and ability to maintain code. Both the suggestion to not black out images and the suggestion to blur them fall into this category, and I would be more than happy to implement such features, but given the wider ethical considerations about "AI safety", I'm not sure how much customization we should allow (to be clear, I really don't know because I've never worked in AI safety).
  • Finally, all of the above use cases can already be solved by https://github.com/huggingface/diffusers/pull/815 in that you just don't run the safety checker at all, but just run it yourself.

=> Overall, I'm also always pro enabling downstream users with open-source libraries, so let's try to go ahead with your and @theahura's feature request and just be careful to make sure people not "blacking" out the images know what they are doing.

Would any of you like to open a PR? I'm fine with adding a flag "nsfw_filter_type" to https://github.com/huggingface/diffusers/blob/2b7d4a5c218ed1baf12bca6f16baec8753c87557/src/diffusers/pipelines/stable_diffusion/safety_checker.py#L24

that can take one of three values: "black_image", "blurred_image", "none", and then respectively blacks out the image, blurs the image, or leaves the image as is.
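
Concretely, the branching for such a flag could look roughly like this (a sketch only; nsfw_filter_type and the filter_flagged_images helper are proposals, not existing diffusers code):

    import numpy as np
    from PIL import Image, ImageFilter

    def filter_flagged_images(images, has_nsfw_concepts, nsfw_filter_type="black_image"):
        # images: NHWC float array in [0, 1]; has_nsfw_concepts: list of bools
        for idx, flagged in enumerate(has_nsfw_concepts):
            if not flagged or nsfw_filter_type == "none":
                continue
            if nsfw_filter_type == "black_image":
                images[idx] = np.zeros(images[idx].shape)
            elif nsfw_filter_type == "blurred_image":
                pil = Image.fromarray((images[idx] * 255).round().astype("uint8"))
                blurred = pil.filter(ImageFilter.GaussianBlur(radius=18))
                images[idx] = np.asarray(blurred).astype(np.float32) / 255.0
        return images, has_nsfw_concepts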

The API for this could then look as follows:

from diffusers import StableDiffusionPipeline
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker

safety_checker = StableDiffusionSafetyChecker.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="safety_checker", nsfw_filter_type="none")

pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", safety_checker=safety_checker)

Does this work for you?

Would any of you be willing to open a PR @theahura or @WASasquatch - I'm a bit under water at the moment with other issues.

Also @WASasquatch, I'm always happy about feedback/issues, but two things:

    1. It'd be nice to be a bit more positive in your tone: "Assuming the Developer using your API is stupid, is really kinda offensive" and "Is this really more about liabilities, shoving it off to "community" pipelines?" aren't super constructive comments to come to a good conclusion/compromise. I think it's quite obvious for everybody that image-generation models are sensitive topics given their capability of producing harmful images. While I agree with most of your proposals, an accusative tone doesn't really help here
    2. While feature requests are always nice, PRs are even nicer. If you feel strongly about a feature, opening a quick draft PR is always a good idea

patrickvonplaten avatar Oct 17 '22 18:10 patrickvonplaten

Alright, I have a PR that I was working on. I just need to read up on PRs, I guess, as I only really know how to do it via forking and a merge request (where it just picks up the changes you made as the PR). I had to back up and delete a fork I already had with some experiments, though; I'm not sure why it wouldn't just let me rename it and fork again.

Regarding censoring: I agree 100% that the default behaviour should indeed be to censor. I just feel that we should be able to actually utilize the NSFW concept boolean to implement our own methods (such as blurring).

Likewise, I do feel that bloating the safety checker with optional censoring isn't needed. Returning black is probably the most "encompassing" when considering sensitive content. If they want to do Gaussian blur like I do, it's rather easy to implement.

Example:

    from PIL import ImageFilter  # needed for GaussianBlur

    if pipeline.nsfw_content_detected[0] and ENABLE_NSFW_FILTER:
        print("NSFW concepts were found in your image. Censoring NSFW content.")
        image = pipeline.images[0].filter(ImageFilter.GaussianBlur(radius=18))

It'd be nice to be a bit more positive in your tone: "Assuming the Developer using your API is stupid, is really kinda offensive" and "Is this really more about liabilities, shoving it off to "community" pipelines?" aren't super constructive comments to come to a good conclusion/compromise. I think it's quite obvious for everybody that image-generation models are sensitive topics given their capability of producing harmful images. While I agree with most of your proposals, an accusative tone doesn't really help here

I'm sorry this wasn't positive, but it was a concern based on negative impact. I felt that you're treating me and other developers (who are the ones actually using diffusers, and who do look at documentation regarding customization beyond just a prompt) as if we won't read the docs. The average people you seem to be citing are using the countless services and notebooks with far more features readily available than diffusers. I felt there wasn't transparency regarding why it was actually implemented this way; it looks like an original safety checker hastily modified into a brute-force censor. And in the few topics I've monitored since, there seems to be no budging on the matter, or plans to change, without much elaboration. And again, like you said:

If 10,000 people use this API, 9,000 will just use the default settings and never read the docs.

I don't think that's true. In fact, there has been a lot of talk about the lacking documentation on the SD Discord and in other communities since release. This isn't a positive view of the community and users of diffusers. And coming from Hugging Face Diffusers, seemingly assuming most of their users are too lazy or inept to read documentation is a 😔 face, not 🤗

I'll also add that in most cases things are unfiltered, and it's always up to the developer(s) to implement filters - whether image filters, text filters, user filters, IP filters, whatever the case - unless it's a service to the end user, like Dream Studio or something. Though with Stable Diffusion etc., I understand the hesitance, but where it stems from isn't really a development perspective.

WASasquatch avatar Oct 18 '22 04:10 WASasquatch

Thanks so much for this discussion, and just popping in to say I completely agree that Hugging Face should create, and maintain, code for post-processing generated content (not limited to blurring, but that's definitely one). IIRC, the current safety/censoring approach came from discussions with Stability AI -- it definitely wasn't one I had recommended.

From my perspective, there just haven't been enough engineers around at HF with "free time" to do the more nuanced work needed here. We're creating a job description for this task (and others), but it's not approved yet + not up yet + not hired-for yet.

In the meantime, any community work on this would be beneficial for everyone (I think), and hopefully makes sense since Stable Diffusion is a community contribution. We'd want to have Stability AI agree to the approach too, since we'd be changing from what had been worked through earlier.

meg-huggingface avatar Oct 18 '22 18:10 meg-huggingface

@meg-huggingface can you comment more on the discussion with Stability AI? I was under the impression that the Stable Diffusion model is tangentially related to Stability AI at best (they don't seem to be the maintainers on the original SD repo, nor are they authors on the paper), so I'm curious why Stability AI would be involved in any discussions around usage of the model

theahura avatar Oct 18 '22 19:10 theahura

Hey @WASasquatch,

I'm sorry if you get the feeling I'm not listening - we really are listening. As said by @meg-huggingface, the requested feature here (as said above) is ok to implement since you and @theahura want to implement it and we think it's fine as well. I currently just don't have the time to look into it. I'd be very happy to review a PR though!

patrickvonplaten avatar Oct 20 '22 18:10 patrickvonplaten

BTW, the same thing holds true for documentation in general - we would really like to improve it (even opened an issue about this yesterday: https://github.com/huggingface/diffusers/issues/915), but we currently don't find the time to do so (it'll surely improve over time though).

Again, please feel free to open a PR to improve / add new docs :-)

patrickvonplaten avatar Oct 20 '22 18:10 patrickvonplaten

@theahura, the Stable Diffusion model was trained and released by Stability AI/CompVis (see the announcement here: https://stability.ai/blog/stable-diffusion-public-release <- there is no paper, that's more or less the equivalent of the paper). Stability, alongside CompVis, is in that regard the "author" of Stable Diffusion, which includes the safety checker.

patrickvonplaten avatar Oct 20 '22 18:10 patrickvonplaten

BTW @WASasquatch, if you want we can also jump on a quick 15min google hangout call to talk in person to smooth some things out?

I'm sorry this wasn't positive, but it was a concern based on negative impact. I felt that you're treating me, and other developers (who are the ones actually really using diffusers, and we do look at documentation regarding customization beyond just a prompt I feel).

-> maybe I can help with that when chatting face-to-face. GitHub communication can come across a bit too hostile I think

My email is [email protected] , feel free to leave a message if you want to chat :-)

patrickvonplaten avatar Oct 20 '22 18:10 patrickvonplaten

@patrickvonplaten Thanks for the response re Stability, but I don't think that's correct.

Paper: https://ommer-lab.com/research/latent-diffusion-models/

Which is linked clearly in the CompVis repo: https://github.com/CompVis/stable-diffusion

The actual research here was done by Runway ML and University of Munich. You'll notice that the model card is also made available by CompVis (aka Munich) not Stability AI; and that Stability AI is not a maintainer or even a contributor to the main repository. Stability AI did provide free GPUs and infra support, much like AWS or Coreweave.

I know this seems pedantic, but I have been following the development of stable diffusion for a bit and I am concerned that Stability AI has incentives to claim they are more involved than they are and to limit the way these models end up being used. I strongly encourage the maintainers of this library to speak to the authors and the creators of the model who open sourced it, instead of the private company.

theahura avatar Oct 20 '22 18:10 theahura

Actually I didn't know about this paper - thanks for pointing it out here. To be honest, I also don't know the fine print here exactly - I do think though that Stability AI has the IP rights for the Stable Diffusion checkpoints (I'm not 100% sure though), as they were trained on their cluster, and the README of Stable Diffusion also states:

Stable Diffusion was made possible thanks to a collaboration with Stability AI and Runway and builds upon our previous work:

I'm not really sure about the fine print though - maybe @apolinario, you know better here?

patrickvonplaten avatar Oct 20 '22 18:10 patrickvonplaten

I am pretty sure they DON'T have any IP rights at all (and I'm very concerned if they are claiming otherwise). Note that the license on the CompVis repo is made out to Robin Rombach (Munich) and Patrick Esser (RunwayML): https://github.com/CompVis/stable-diffusion/blob/main/LICENSE#L1

In light of the recent controversy between Stability and the creators of Stable Diffusion (see: https://news.ycombinator.com/item?id=33283712 and https://news.ycombinator.com/item?id=33279290) I think it's important to strongly consider the incentives behind how people say safety should work (including my own! take what I say with as much a grain of salt as everything else! :joy: )

theahura avatar Oct 21 '22 05:10 theahura

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] avatar Nov 14 '22 15:11 github-actions[bot]

keeping this thread open -- hasn't been resolved yet. I'd be fine opening a PR but haven't had time to do so

theahura avatar Nov 14 '22 15:11 theahura

In light of the recent controversy between Stability and the creators of Stable Diffusion (see: https://news.ycombinator.com/item?id=33283712 and https://news.ycombinator.com/item?id=33279290) I think it's important to strongly consider the incentives behind how people say safety should work (including my own! take what I say with as much a grain of salt as everything else! 😂 )

-- Irrelevant -- There are probably reasons they do not implement safety features like this in any DCC art tools, and instead leave it to individual interpretation and complaint, and ultimately law enforcement. It reminds me of how it's not illegal to make copies of movies or TV in private, but VHS and DVD manufacturers implemented protection features to ruin recordings when copying was detected, via alternating brightness or other inhibiting tricks -- and in places like the UK, manufacturers were required to provide a workaround for copy protection for lawful acts, like in-home backups of paid-for copyrighted products (you pay for the rights to that physical copy as your own property).

-- On-Topic -- What exactly is the hold-up here? Have we decided what we want the safety checker to do? Allow a flag that disables blacking out an image? Replace the black-out with a Discord-beta-era Gaussian blur? Another idea? I'd be happy to pull and push a PR, but it isn't clear what exactly we all want to settle on.

I will add that I think a lot of people don't actively push PRs because of how development works here, with long waits, and with the PR author having to maintain their code through various updates before it's merged and handled with said updates. It's actually a bit of a hassle, especially for something tiny of a couple of lines (though I don't foresee the safety checker getting modifications in place of an update like this proposed PR).

WASasquatch avatar Nov 14 '22 23:11 WASasquatch

I was ok with @patrickvonplaten's proposal, and thought it neatly solved everyone's concerns. See: https://github.com/huggingface/diffusers/issues/845#issuecomment-1280541219

Patrick offers implementing 'blurred image' as a default, but I would rather the options be "enabled", "warn_only", and "none" (a rough sketch of these semantics follows the list), where:

  • "enabled" == "black out images and give a boolean array indicating which ones were NSFW",
  • "warn_only" == "give the full images and a boolean array indicating which ones were NSFW"
  • "none" == "give the full images and dont run any safety checking (return a boolean array of all False values)"

theahura avatar Nov 15 '22 01:11 theahura

Hey,

Yeah, this issue has spiraled out a bit. So it's already possible to disable the safety checker as shown here: https://github.com/huggingface/diffusers/pull/815

Meaning the "none" option is covered.

Now what could make sense is to add a config attribute to the safety checker config here that is called:

safety_mode="blurred"/"blocked"

This can be implemented here: https://github.com/huggingface/diffusers/blob/195e437ac511f169d36b033f01e0536ce7ea1267/src/diffusers/pipelines/stable_diffusion/safety_checker.py#L38 by adding an optional init argument and/or a set_safety_mode function.
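
A sketch of what that might look like (set_safety_mode, safety_mode, and this standalone class are illustrative only; in a real PR the logic would live on StableDiffusionSafetyChecker):

    import numpy as np

    class SafetyModeSketch:
        def __init__(self, safety_mode: str = "blocked"):
            self.set_safety_mode(safety_mode)

        def set_safety_mode(self, mode: str):
            if mode not in ("blocked", "blurred"):
                raise ValueError(f"unknown safety_mode: {mode}")
            self.safety_mode = mode

        def apply(self, images, has_nsfw_concepts):
            for idx, flagged in enumerate(has_nsfw_concepts):
                if flagged and self.safety_mode == "blocked":
                    images[idx] = np.zeros(images[idx].shape)
                # "blurred" would apply a Gaussian blur here instead (see the earlier sketch)
            return images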

I hope this is clear enough in terms of how to open a PR - would anybody of you like to do this? I won't have time to look into this anytime soon I think.

patrickvonplaten avatar Nov 18 '22 12:11 patrickvonplaten

Two thoughts

Passing None for the safety_checker yields warnings, which must also be disabled - something that is unintuitive for a static system that someone has to configure a specific way themselves to get that functionality.

A config attribute looks nice in theory, but most people do not instantiate the safety checker themselves, so it seems all this would be added to pipe creation, which would mean editing all the pipes.

It might be easier to just use an internal variable, with censoring enabled by default, that the user could flip: pipe.safety_checker.disable_censor = True

Not sure, but it seems like the easier use-case scenario for most people already using the system.

WASasquatch avatar Nov 18 '22 17:11 WASasquatch

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] avatar Dec 13 '22 15:12 github-actions[bot]

keeping this thread open -- hasn't been resolved yet

theahura avatar Dec 13 '22 23:12 theahura

If someone wants to give this a shot, we could / should add a safety_mode="blurred"/"blocked" to the safety checker configuration that allows it to return the images in different ways. Happy to review a PR for it (cc @theahura )

patrickvonplaten avatar Dec 19 '22 12:12 patrickvonplaten