Implement portal moderation & anti-spam measures
There's a growing trend of spam-related content being uploaded to public portals. Temporary measures to mitigate the issue include setting these portals to private and reporting the associated ORCID accounts. These are only short-term fixes, however; a more sustainable approach is needed to prevent spam content from being uploaded in the first place.
Potential enhancements to reduce spam content include:
- Approval requirement for public portals: Add an approval or moderation step for portals before they go public. This could involve a review system where new portals are initially set to private and require screening by a moderation team before being published. (Related to the planned improvements to the publication workflow.)
- Notification system & post-creation moderation: Implement a notification system to alert moderators when new portals are created. Moderators could then review the portal content and remove/make private any spam-related content. This would reduce friction for legitimate users while maintaining a level of oversight. (Somewhat related to the planned dataset notification service feature.)
- Verification requirement for portal creators: Limit portal creation to verified users. This would entail developing a method to verify ORCID accounts to ensure they correspond to legitimate individuals. We might consider reviewing ORCID's dispute and spam handling policies as a reference.
- Automated spam detection and filtering: Explore automated systems to detect and filter potential spam content at the portal creation stage. This might involve algorithmic checks against known spam indicators or patterns.
- User reporting and feedback mechanisms: Implement a system for users to report suspicious content or behaviour. This could include a "report" button on portals, allowing community members to flag potential spam for review.
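To make the automated-detection idea above concrete, here is a minimal sketch of the kind of heuristic check that could run at portal creation time. Everything here is a hypothetical illustration, not existing MetaCatUI code: the pattern list, the scoring weights, and the threshold would all need tuning against real spam samples, and a production system would likely combine this with a trained classifier or a third-party service.

```typescript
// Hypothetical heuristic spam filter for new portal submissions.
// Patterns, weights, and threshold are illustrative assumptions only.

function spamScore(title: string, description: string): number {
  const text = `${title} ${description}`;
  let score = 0;

  // Common spam phrases (example indicators, not an exhaustive list)
  if (/\bfree\s+download\b/i.test(text)) score += 2;
  if (/\b(casino|viagra|crypto\s+giveaway)\b/i.test(text)) score += 3;

  // Many embedded links in a short description is a frequent spam signal
  const links = text.match(/https?:\/\/\S+/g) ?? [];
  if (links.length >= 3) score += 2;

  // Excessive punctuation ("shouting") adds a small amount of evidence
  if (/!{3,}/.test(text)) score += 1;

  return score;
}

function looksLikeSpam(
  title: string,
  description: string,
  threshold = 3 // assumed cutoff; would be calibrated in practice
): boolean {
  return spamScore(title, description) >= threshold;
}
```

A submission that trips the threshold could be held back as private and routed into the moderator notification queue described above, so that false positives are reviewed by a human rather than rejected outright.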