redlib
💡 Feature request: Disable web crawlers by default
Is your feature request related to a problem? Please describe.
I was sent a DMCA notice (along with other Redlib instance owners) by WebToon and Google because of a Reddit post that a web crawler reached through my Redlib instance.
Describe the feature you would like to be implemented
I'd like crawler protection to be enabled by default.
Describe alternatives you've considered
There aren't any real alternatives other than CAPTCHAs.
Additional context / screenshot
To be specific, I just mean that the served robots.txt file should disallow all crawlers by default, with crawling available as an opt-in option.
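
As a sketch, the default robots.txt served by an instance could look like this (the opt-in setting name is hypothetical, not an existing Redlib option):

```
# Default: disallow all crawlers.
# Instance owners could opt in to crawling via a config flag
# (e.g. a hypothetical REDLIB_ROBOTS_DISABLE_INDEXING=off),
# which would serve a permissive robots.txt instead.
User-agent: *
Disallow: /
```

Well-behaved crawlers (including Googlebot) honor this, so it would prevent the situation above without needing CAPTCHAs.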