
feat: nsfw/watermark updates

Open psychedelicious opened this issue 1 year ago • 3 comments

Summary

Currently, NSFW checking doesn't work on new installs due to a catch-22. See #6252.

I had an idea to move NSFW and watermark to be config settings, then do the check/watermark in the invocation API as images are saved. The NSFW check and watermarking are thus fully automatic and transparent, and work in workflows too, without any user changes. This was really easy to implement.

It works well, but there is a problem on canvas where a graph does many image-saving operations.

For example, an inpaint graph does at least 6x image saves: 4x resize, 1x VAE decode and 1x paste-back. Each time, the image is checked - possibly blurred - and watermarked.
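The save-time approach could be sketched roughly like this. All names here (`AppConfig`, `check_nsfw`, `blur_with_caution`, `add_watermark`) are illustrative stand-ins, not InvokeAI's actual API, and the image is modeled as a plain dict instead of a real PIL image:

```python
# Hypothetical sketch of config-driven post-processing at image-save time.
# Names and signatures are illustrative; InvokeAI's real code differs.
from dataclasses import dataclass


@dataclass
class AppConfig:
    nsfw_check: bool = False
    watermark: bool = False


def check_nsfw(image: dict) -> bool:
    # Stand-in for a real safety-checker model.
    return image.get("nsfw", False)


def blur_with_caution(image: dict) -> dict:
    # Stand-in for blurring the image and compositing a caution symbol.
    return {**image, "blurred": True}


def add_watermark(image: dict) -> dict:
    # Stand-in for an invisible-watermark pass; note it alters the pixels.
    return {**image, "watermarked": True}


def save_image(image: dict, config: AppConfig) -> dict:
    # Runs on EVERY save, so every intermediate image in a graph passes
    # through here -- which is the source of the repeated-processing problem
    # described above.
    if config.nsfw_check and check_nsfw(image):
        image = blur_with_caution(image)
    if config.watermark:
        image = add_watermark(image)
    return image
```

Because `save_image` runs on each of those 6+ saves, the blur and watermark compound across the chain.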

Watermark changes the image

Watermarking subtly changes the image each time it is saved, and this introduces some chaos that impacts how the model handles the images. The final output is markedly different from what you would have gotten without any watermarking.

Early NSFW detection borks the rest of the generation

If an early image in the "chain" is NSFW, it still gets passed along. There are two possible outcomes:

  1. The final image is still NSFW, and is the result of blurring the image, adding the caution symbol, blurring that image again and adding the caution symbol again, and so on. The result is a super-blurred image with blurred caution symbols and then a sharp caution symbol on top.
  2. At some point the image was determined to no longer be NSFW (because it was blurred and a caution symbol put on it, and this was enough to change the determination). Then that gets used as the input to canvas, and the final result is this black and yellow smudge that somewhat follows the original prompt.

Other ideas

Ok, so this isn't viable. Some other ideas:

  1. Remove both NSFW check and watermark entirely.
  2. Just fix the model download and leave NSFW and watermark as part of the graphs, and a UI setting.
  3. Raise a NSFWImageError on NSFW detection - IMO, this is how it should work anyways. The user would get a toast with the error.
  4. Only do the check and watermark on terminal/leaf nodes.
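For idea 4, "terminal/leaf" nodes would be the ones whose output feeds no other node in the graph. A minimal sketch of identifying them, assuming the graph is given as a node set and an edge list (both hypothetical shapes, not InvokeAI's graph representation):

```python
# Hypothetical sketch of idea 4: find the terminal (leaf) nodes of a graph,
# i.e. nodes that are never the source of an edge, so the NSFW check and
# watermark would run only on their outputs.
def terminal_nodes(edges: list[tuple[str, str]], nodes: set[str]) -> set[str]:
    # A node is terminal if its output is not consumed by any other node.
    sources = {src for src, _ in edges}
    return nodes - sources
```

For a chain like resize → denoise → paste-back, only the paste-back node would qualify, so intermediates are left untouched.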

Related Issues / Discussions

Closes #6252 Closes #6092

QA Instructions

n/a for now

Merge Plan

n/a

Checklist

  • [x] The PR has a short but descriptive title, suitable for a changelog
  • [ ] Tests added / updated (if applicable)
  • [x] Documentation added / updated (if applicable)

psychedelicious avatar Apr 23 '24 22:04 psychedelicious

@psychedelicious If you like I'd be happy to work on this after getting the model manager API updates in. Both the NSFW and watermarking features do have real use cases, even if they aren't used all that frequently.

lstein avatar Apr 25 '24 02:04 lstein

@lstein Sure thing.

After thinking about it more, I'm leaning towards:

  • If watermark is enabled in config, watermark every image. This is already done in the PR. As described, the watermarking can change outputs, but I think this is reasonable. The expectation is that watermarking applies to all outputs, so we can't really get around it changing images. And this is the only sane way to support watermarking in the workflow editor.
  • If nsfw_check is enabled in the config, raise a NSFWImageDetectedError to immediately fail the graph. My thinking is, if you want to check for NSFW and NSFW is detected, there's no point in continuing with that generation - it should immediately stop.

If that makes sense, there would only be some minor changes needed for this PR.
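The fail-fast variant could look something like this. The error class name comes from the comment above; everything else (the function shape, the dict-based image) is an illustrative assumption:

```python
# Hypothetical sketch of the fail-fast approach: raise on NSFW detection
# instead of blurring, so the graph stops immediately. Only the error
# name is from the discussion; the rest is illustrative.
class NSFWImageDetectedError(Exception):
    """Raised when a generated image trips the NSFW check."""


def save_image(image: dict, nsfw_check: bool = False, watermark: bool = False) -> dict:
    # Abort the whole generation rather than blurring the image and
    # passing it along to downstream nodes.
    if nsfw_check and image.get("nsfw", False):
        raise NSFWImageDetectedError("NSFW content detected; generation aborted")
    if watermark:
        image = {**image, "watermarked": True}
    return image
```

The UI would catch the error and surface it as a toast, as described above.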

psychedelicious avatar Apr 25 '24 03:04 psychedelicious

This sounds reasonable. I'll see if I can get this working later this week.

lstein avatar Apr 28 '24 19:04 lstein

After thinking through things, this approach isn't viable. We need NSFW and watermark to be user-configurable via the UI, it cannot be something that is set in the config file.

This is superseded by #6360.

psychedelicious avatar May 13 '24 08:05 psychedelicious