NIP-56: Reporting
This NIP provides a way for clients to report profiles and notes to relays.
This is currently implemented by Damus as an Apple App Store requirement, but may be useful in other moderation contexts.
https://github.com/nostr-protocol/nips/blob/reporting/56.md
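For readers skimming the thread, here is a rough sketch of what a report event under this draft could look like, with the report type as the 3rd entry of the e/p tag (the convention quoted later in the thread). All ids, pubkeys, and the reason string are placeholders, not real data.

```ts
// Sketch of a kind 1984 report event as described in this draft (NIP-56).
// Every id, pubkey, and string below is a placeholder.
const report = {
  kind: 1984,
  pubkey: "<reporter-pubkey-hex>",
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    // report type goes in the 3rd position of the e/p tag being reported
    ["e", "<reported-note-id-hex>", "nudity"],
    ["p", "<reported-author-pubkey-hex>"],
  ],
  content: "human-readable description of the complaint",
  // id and sig would be computed and signed like any other nostr event
};
```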
thanks for putting this together! looks good to me. pretty straightforward.
do you think it would be useful to make the report type queryable? maybe just use 'r' tag instead of 'report'?
do you think it would be useful to make the report type queryable? maybe just use 'r' tag instead of 'report'?
not really
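For context on why the question comes up: standard filters match on a tag's value (its 2nd element), so a report type sitting in a later position is not directly queryable. The 'r' tag below is only the commenter's suggestion, not part of the draft; the second filter reflects how clients would fetch reports under the draft as written.

```ts
// Hypothetical only: if the type lived in its own single-letter tag,
// e.g. ["r", "nudity"], relays that index single-letter tags could serve:
const byTypeFilter = { kinds: [1984], "#r": ["nudity"] };

// With the draft as written, clients instead fetch all reports about a note
// or pubkey and inspect the report type client-side:
const aboutNoteFilter = { kinds: [1984], "#e": ["<reported-note-id-hex>"] };
```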
If a report is many types in one (e.g. illegal and explicit), is it recommended to send separate events or just more than one "report" tag?
explicit - explicit content which may not be appropriate for minors: nudity, profanity, etc.
Do we want someone posting hardcore porn and someone saying "fuck" on a tweet to generate the same report signal? I think it would be more useful for other clients if those two cases could be separate report types.
A use case I can think of is hiding NSFW profiles in "work mode" but not hiding users and posts that say "shit, the weather sucks today".
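A minimal sketch of that "work mode" case, assuming the split into separate nudity/profanity types that this comment argues for (and which the draft later adopted). The event shapes follow the draft; the helper names are illustrative, not from the spec.

```ts
// Minimal client-side sketch: react to "nudity" reports, ignore "profanity" ones.
type ReportEvent = { kind: number; tags: string[][] };

function reportTypesFor(pubkey: string, reports: ReportEvent[]): string[] {
  return reports
    .filter((ev) => ev.kind === 1984)
    .flatMap((ev) => ev.tags)
    .filter((t) => t[0] === "p" && t[1] === pubkey && t[2] !== undefined)
    .map((t) => t[2]);
}

function hideInWorkMode(pubkey: string, reports: ReportEvent[]): boolean {
  // Hide profiles flagged for nudity; a lone profanity flag is not enough.
  return reportTypesFor(pubkey, reports).includes("nudity");
}
```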
Just here to lol @ kind:1984
What would we think about adding a hint for relays to keep a report private? It could be private (for relay eyes only) or trusted (for friends' eyes only) or public? Reporting bad behavior can be tricky, and some people may avoid reporting a genuine issue if they feel that public reports would open them to retaliation or harassment.
If a report is many types in one (e.g. illegal and explicit), is it recommended to send separate events or just more than one "report" tag?
probably makes sense to just send more report tags.
Going with @Giszmo's suggestion of having the report tag types tied to the p and e tags of the thing being reported. It's clearer.
I split explicit into profanity and nudity.
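A sketch of how the two points above combine: a single report event can carry more than one type by repeating the tag, with each type tied to the e/p tag being reported. Ids and the reason string are placeholders.

```ts
// One kind 1984 event flagging the same note as both illegal and nudity.
const multiTypeReport = {
  kind: 1984,
  tags: [
    ["e", "<reported-note-id-hex>", "illegal"],
    ["e", "<reported-note-id-hex>", "nudity"],
    ["p", "<reported-author-pubkey-hex>", "nudity"],
  ],
  content: "human-readable reason",
};
```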
What would we think about adding a hint for relays to keep a report private? It could be private (for relay eyes only) or trusted (for friends' eyes only) or public?
nostr is not very private in general, private reporting would have to be a separate nip.
@Semisol Can you live with the complaint being human readable and switch to ACK?
The jurisdiction applicable to the report, specifically with respect to illegal content, feels like information that is required but currently omitted: e.g. nudity that is legal in French jurisdiction may be illegal in UAE jurisdiction.
If no one objects I'll merge this today.
To be honest, I think this draft is problematic.
Specifically, my issue is with:
A report type string MUST be included as the 3rd entry to the e or p tag being reported, which consists of the following report types:
nudity - depictions of nudity, porn, etc.
profanity - profanity, hateful speech, etc.
illegal - something which may be illegal in some jurisdiction
spam - spam
impersonation - someone pretending to be someone else
Why? What's so special about those categories of content?
Wanting to ban illegal content and spam is rather universal. The first is necessary if you don't want the police to knock at your door, the second if you want your service to remain useful. Impersonation is also often illegal.
But nudity and profanity seem completely arbitrary to me. This is not to say banning either of them is a bad idea. And it's not to say many wouldn't want to ban them. It is to say, however, that it's an entirely subjective and circumstantial multi-factor decision, and that other kinds of content may be just as ban-worthy to other people.
And while I understand Damus specifically needed this for compliance with Apple's terms of use, I don't believe the TOS of a specific company should affect what goes in the specification of the protocol.
There should, instead, be some system to report arbitrary kinds of content to those relays that ban that specific kind of content. The protocol itself should be neutral in regards to what content is or is not "ban worthy".
And as for "illegal", I should mention that it means almost exactly nothing if a specific jurisdiction isn't specified. Laws are territorial.
Before seeing this proposal, I had thought of a report system which wouldn't require the NIP itself mentioning specific kinds of content which offend the whims of some specific company.
I most surely don't claim, or even remotely suggest, that I could do a better job than jb55, but I do feel very uncomfortable with the report types.
On Mon, 27 Feb 2023 21:32:33 -0800 Valentino Giudice @.***> wrote:
And as for "illegal", I should mention that it means almost exactly nothing if a specific jurisdiction isn't specified. Laws are territorial.
I would go further and say illegal requires citing explicitly the law in question. That would greatly benefit from having an explicit, strict, structured form, so you don't just reference "the law of it hurts my sensibilities, section 9001, in the criminal code of I don't like this post". Having the jurisdiction AND the laws infringed is really not optional when making assertions that have legal ramifications.
-- ~jeff
I would go further and say illegal requires citing explicitly the law in question.
Folks, this NIP is just a flagging event. Relays and other providers/clients should aggregate these reports, review them, and take appropriate action. Users don't have the ability to provide a legal opinion about a matter. They can just provide feedback on the content they see and don't want to see anymore and/or don't want their followers to see it. The rest is for other entities in the field and law enforcement to navigate.
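A sketch of the aggregation step described here: subscribe to reports about a pubkey under review and tally them per report type, leaving the decision to a human. The relay request shape follows standard nostr filters; the subscription id, placeholder pubkey, and tally helper are illustrative.

```ts
// Request all reports referencing a pubkey under review.
const req = JSON.stringify([
  "REQ",
  "report-review",
  { kinds: [1984], "#p": ["<pubkey-under-review>"] },
]);

const counts = new Map<string, number>();

// Tally each report type found on p tags of incoming kind 1984 events.
function tally(report: { tags: string[][] }): void {
  for (const t of report.tags) {
    if (t[0] === "p" && t[2]) {
      counts.set(t[2], (counts.get(t[2]) ?? 0) + 1);
    }
  }
}
// A moderation dashboard could then sort flagged pubkeys by these counts
// before anyone decides whether to act.
```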
@Aspie96 I agree with you. We can reword the NIP to say the report-type reason could be anything, and list some examples. We can mention the fact that relays might have arbitrary policies and that these arbitrary policies could be used in reports and as a basis for banning pubkeys from those relays. We should do that.
If you can submit a PR making these modifications that will be great.
@majestrate I don't agree with you. Nostr is not a legal court, it's a neutral system. The laws of the United States of America have pretty much the same validity as the laws of It Hurt My Feelings here. Everything is subjective to the parties involved.
I would go further and say illegal requires citing explicitly the law in question.
Not sure if I agree.
I think many people know that something is illegal without knowing the specific article of the specific law.
In fact, to be able to operate in one's own jurisdiction, one ought to have some understanding of the laws they have to follow, and therefore be able to identify illegal behavior, but that doesn't mean they know the reference to any specific law.
My point stands, however, that "illegal" doesn't mean anything if a specific jurisdiction isn't mentioned.
@Aspie96 I agree with you. We can reword the NIP to say the report-type reason could be anything, and list some examples
My issue is with the report type being a word, as it currently is.
This necessarily requires the meaning of those words to be specified either in NIP-56 or in some other NIP.
There are potentially infinite rules one could apply. And there are many kinds of content one could very reasonably want to forbid which do not fall into any of the above categories.
I regret not seeing this NIP before and not developing my idea previously. I will try to do my best to write something. I will ask for feedback before sending a pull request, however.
I might open a discussion, if that's ok, so that I don't spam this PR with incomplete ideas about another potential PR.
On Tue, 28 Feb 2023 07:23:28 -0800 fiatjaf_ @.***> wrote:
@majestrate I don't agree with you. Nostr is not a legal court, it's a neutral system. The laws of the United States have pretty much the same validity as the laws of it hurt my feelings here. Everything is subjective to the parties involved.
I think there is a lot of room for improvement here. Adding structured metadata to reports when content is illegal would be a boon to those making tooling. It would make legal compliance in every jurisdiction SO much easier. Given the direction of the winds of change, this is something we will need in the future regardless of how we feel about it.
I'll just state that I think this "every law is just as valid as my feelings" stance is a bad default stance. The biggest gripe with decentralized moderation as implemented in every other system I have seen is this exact sentiment: ambiguity of report severity. Adding explicit and well-defined structure to reports will only do good for future tooling, because someone posting a swear word is absolutely not the same severity as CSAM.
-- ~jeff
a swear word is absolutely not the same severity as CSAM
CSAM is illegal pretty much everywhere, but that is the exception, not the rule. Most laws differ by jurisdiction.
And even in the case of child pornography, what even counts as child pornography can change quite wildly, with the exact same image being a criminal liability in one jurisdiction and perfectly in the clear in another.
And a "swear word" could totally be a legal liability depending on context and jurisdiction.
Do I feel that CSAM is morally worse than swear words? Yes. Do I want that feeling of mine to affect the protocol? No.
CSAM is illegal pretty much everywhere, but that is the exception, not the rule. Most laws differ by jurisdiction.
And a "swear word" could totally be a legal liability depending on context and jurisdiction.
That is why it is important to clarify what constitutes the claimed illegality and where it applies, because insulting the prophet Mohamed is in fact almost just as illegal in some parts of the world.
Something to note: not all jurisdictions are compatible with each other. Defaulting to assuming everyone is in the sphere of influence of the USA blocks you off from very sizable regions online, when interoperating in those regions can be possible using proper classification on reports. Adding such classification opens the protocol up to being accessible in many more places, as infrastructure can carry a low enough risk to host in more prudish countries.
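Purely to illustrate the structured-metadata idea floated in this comment, and NOT part of the draft: one conceivable shape would be extra tags carrying a jurisdiction code and a legal citation. The "jurisdiction" and "law" tag names are hypothetical.

```ts
// Illustrative only; these tag names are not defined in NIP-56.
const illegalReport = {
  kind: 1984,
  tags: [
    ["e", "<reported-note-id-hex>", "illegal"],
    ["jurisdiction", "FR"],                 // hypothetical: ISO 3166-1 country code
    ["law", "<statute or article cited>"],  // hypothetical: free-form legal citation
  ],
  content: "why the reporter believes this is illegal in the named jurisdiction",
};
```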
This necessarily requires the meaning of those words to be specified either in NIP-56 or in some other NIP.
What is the problem with it being a word? Various words could be used by clients and relays according to whatever they think makes sense. Once some word becomes a de facto standard we can modify this PR to add it; otherwise it can live as a de facto standard outside of the NIPs too.
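A minimal sketch of that "any word can be a type" approach: handle the listed types specially, but pass unknown words through so new de facto standards can emerge without a NIP change. The set and function names are illustrative.

```ts
// Types listed in the current draft; anything else is still accepted.
const KNOWN_TYPES = new Set(["nudity", "profanity", "illegal", "spam", "impersonation"]);

function classifyReportType(reportType: string): "known" | "other" {
  return KNOWN_TYPES.has(reportType) ? "known" : "other";
}
// "other" types can still be surfaced verbatim to moderators or relay policy.
```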
My issue is that without at least a short definition it would probably be way too vague.
As for including future de facto standards in the repository, that would be worse, since it may be seen as an endorsement of those rules, and some of them may be rather terrible.
For example, many relays, in many countries, could forbid anything related to homosexuality. I don't know if it will happen, but it's at least very possible.
Those who think such rules are completely and utterly unreasonable, perhaps even immoral, wouldn't want them in the repo, effectively requiring a vetting process for new future rule identifiers.
Regardless, the current NIP expressly says that the report type "consists of the following [...] types", so using another type name would actually be a violation of the NIP-56 specification.
I opened a discussion about this: https://github.com/nostr-protocol/nips/discussions/309
(I used a discussion instead of commenting here because I am no longer addressing the text of this PR directly).
Regardless, the current NIP expressly says that the report type "consists of the following [...] types", so using another type name would actually be a violation of the NIP-56 specification.
That's why I asked you to change the NIP to soften that rule.
I just checked my relay for reports and I only found reports on notes that are not even stored on my relay. Wouldn't it make sense to recommend in this NIP that reports be sent only to relays that have actually stored the note in question? Of course, those reports can easily be filtered out on the relay itself as well. So another possibility is to recommend that a relay does not need to store a report if it does not have the event in question stored.
Yes, both these things make sense. If you want to submit a patch adding these two recommendations that will be great.
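A sketch of the relay-side half of that recommendation: skip storing kind 1984 events whose referenced notes are not stored locally. The `haveEvent` callback stands in for whatever lookup a given relay implementation provides.

```ts
// Returns false when a report references only notes the relay does not have.
async function shouldStoreReport(
  report: { kind: number; tags: string[][] },
  haveEvent: (id: string) => Promise<boolean>
): Promise<boolean> {
  if (report.kind !== 1984) return true;
  const referenced = report.tags.filter((t) => t[0] === "e").map((t) => t[1]);
  if (referenced.length === 0) return true; // profile-only report: nothing to check
  const found = await Promise.all(referenced.map(haveEvent));
  return found.some(Boolean); // keep if at least one referenced note is stored
}
```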