
False positives on coffee cup images

adnankhan37 opened this issue 3 years ago • 12 comments

Hello,

We have been using Inception v3 for NSFW detection and we are getting false positives on coffee cup images.

[five coffee cup images attached]

Any suggestions on how we can tackle this?

adnankhan37, Feb 28 '22
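One way to tackle this on the application side, without touching the model, is to stop trusting the single top prediction and instead require the combined probability of the explicit classes to clear a tunable threshold. A minimal sketch, assuming nsfwjs's five class names and its `[{ className, probability }]` prediction shape as returned by `classify()`; the threshold value and the sample numbers are made up for illustration:

```javascript
// Explicit classes among nsfwjs's five outputs
// (Drawing, Hentai, Neutral, Porn, Sexy).
const EXPLICIT = new Set(["Porn", "Hentai", "Sexy"]);

function isNSFW(predictions, threshold = 0.7) {
  // predictions: [{ className, probability }, ...] as returned by classify()
  const explicitScore = predictions
    .filter((p) => EXPLICIT.has(p.className))
    .reduce((sum, p) => sum + p.probability, 0);
  return explicitScore >= threshold;
}

// Hypothetical coffee-cup false positive: "Porn" narrowly wins the top
// spot, but the total explicit mass stays below the threshold.
const coffeeCup = [
  { className: "Porn", probability: 0.38 },
  { className: "Neutral", probability: 0.35 },
  { className: "Drawing", probability: 0.15 },
  { className: "Sexy", probability: 0.07 },
  { className: "Hentai", probability: 0.05 },
];
console.log(isNSFW(coffeeCup)); // false: explicit mass ≈ 0.50, below 0.7
```

The trade-off is the usual one: raising the threshold trades false positives for false negatives, so the right value depends on how costly each error is for the application.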

WOW! This is a great find. Which model are you using? The default? Does this happen on ALL the models in the dropdown?

GantMan, Feb 28 '22

I am using Inception v3. I also tried MobileNet v2, and it seemed to perform well in some cases like this one:

[image attached]

For this image, MobileNet v2 gave a higher porn probability than Inception v3 did.

[image attached]

I also tested the following image on MobileNet v2, and it gives a false positive.

[image attached]

This seems to be happening with all the models in the dropdown.

adnankhan37, Mar 01 '22

This is good to know. I'll add these to the next training set.

GantMan, Mar 01 '22

Since I am actively using the Inception v3 model in my project, may I know when I will be able to download an updated Inception v3 model trained on these images? :)

Thanks!

adnankhan37, Mar 01 '22

Did you say that one of the other models was better at not being tricked? Because I'm going to add a voting classifier that will help.

GantMan, Mar 01 '22

Yes, that would be MobileNet v2, though it performed better than Inception v3 only in a few cases. And when MobileNet v2 did give a false positive for an image, the probability it reported was higher than Inception v3's.

adnankhan37, Mar 01 '22
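The voting idea discussed above can be sketched as simple aggregation: run the same image through several models (e.g. Inception v3 and MobileNet v2 loaded via separate `nsfwjs.load()` calls) and average their per-class probabilities, so one model's coffee-cup quirk gets outvoted. This is only a sketch of the aggregation step, not nsfwjs's actual implementation, and the prediction numbers below are invented:

```javascript
// Average several models' predictions (soft voting).
// perModelPredictions: one prediction array per model,
// each in nsfwjs's [{ className, probability }, ...] shape.
function averagePredictions(perModelPredictions) {
  const totals = new Map();
  for (const predictions of perModelPredictions) {
    for (const { className, probability } of predictions) {
      totals.set(className, (totals.get(className) || 0) + probability);
    }
  }
  const n = perModelPredictions.length;
  return [...totals]
    .map(([className, total]) => ({ className, probability: total / n }))
    .sort((a, b) => b.probability - a.probability); // highest first
}

// Hypothetical cup image: Inception v3 says Porn, MobileNet v2 says
// Neutral; the averaged top class comes out Neutral.
const inception = [
  { className: "Porn", probability: 0.6 },
  { className: "Neutral", probability: 0.4 },
];
const mobilenet = [
  { className: "Porn", probability: 0.1 },
  { className: "Neutral", probability: 0.9 },
];
console.log(averagePredictions([inception, mobilenet])[0].className); // "Neutral"
```

Soft voting (averaging probabilities) rather than hard voting (counting top-1 winners) has the advantage that a model's confidence still matters even when it is outvoted.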

Hi,

I might have an idea of what is going wrong with the coffee/tea cup images. The NSFW data-source repository you mention in the README contains a category of girls holding teacups.

Link: https://github.com/EBazarov/nsfw_data_source_urls/tree/master/raw_data/appearance/reddit_sub_TeaGirls

Could that be the reason we are getting false positives on the cup images?

Thanks!

adnankhan37, Mar 04 '22

Ahhhh, that makes sense. Strange! Yeah, it would be important to counter-balance this kind of data so the AI doesn't form a bias against teacups.

GantMan, Mar 04 '22

Yes, that sounds great. It would be quite helpful if the model were retrained with teacup images so that we can use the updated model.

Thanks!

adnankhan37, Mar 04 '22
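The counter-balancing idea can be sketched as oversampling safe cup photos into the training list until cup imagery no longer correlates with the NSFW label. All file names below are hypothetical and `counterBalance` is not part of nsfwjs; only the balancing arithmetic is shown:

```javascript
// Generate enough SFW cup examples (repeating files if necessary) to
// pair against the NSFW cup examples, so "cup-ness" stops predicting
// the NSFW label. ratio = SFW cup examples per NSFW cup example.
function counterBalance(nsfwCupFiles, sfwCupFiles, ratio = 1.0) {
  const needed = Math.ceil(nsfwCupFiles.length * ratio);
  const out = [];
  for (let i = 0; i < needed; i++) {
    // Cycle through the SFW pool, labeling each as Neutral.
    out.push({ file: sfwCupFiles[i % sfwCupFiles.length], label: "Neutral" });
  }
  return out;
}

// Hypothetical file lists for illustration.
const nsfwCups = ["teagirls_001.jpg", "teagirls_002.jpg", "teagirls_003.jpg"];
const sfwCups = ["coffee_cup.jpg", "espresso.jpg"];
console.log(counterBalance(nsfwCups, sfwCups).length); // 3
```

In practice augmenting the repeated images (crops, flips, color jitter) would be better than literal duplication, but the goal is the same: make the cup feature appear equally often under both labels.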

Hi, is there any update on the training? I found more false positives in the following images, tested with Inception v3. Thanks!

[three screenshots attached, dated 2022-03-07]

adnankhan37, Mar 14 '22

I've got a few other tasks before retraining this model on the new images. My goal will be to create some metrics for retraining. Unfortunately those are behind ANOTHER item I need to complete. If this is a blocker, I highly recommend you don't wait for me and instead perform transfer learning yourself to fix any bias you find.

Since this is a free open-source project, progress depends on my free time for OSS.

GantMan, Mar 14 '22

Sure, thanks for the help!

adnankhan37, Mar 14 '22