Feature Request: Add robots.txt to block search engine indexing
Description:
When podsync is hosted on a public domain, the generated podcast feeds and episode pages are exposed to search engine crawlers. This can lead to them being indexed in search results and can consume unnecessary bandwidth, especially if crawlers attempt to fetch the media files themselves.
Proposed Feature:
Add an option to serve a robots.txt file. A sensible default could be:
User-agent: *
Disallow: /
The content could be user-configurable via the podsync settings.
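For illustration only, here is a minimal sketch of how the built-in HTTP server could serve such a configurable robots.txt. The handler name and the idea of a config-supplied body are assumptions for this example, not existing podsync code:

```go
package main

import "net/http"

// defaultRobotsTxt blocks all crawlers unless the user overrides it
// via configuration (hypothetical setting, not a current podsync option).
const defaultRobotsTxt = "User-agent: *\nDisallow: /\n"

// robotsHandler serves the given robots.txt body, falling back to the
// default when no custom content is configured.
func robotsHandler(body string) http.HandlerFunc {
	if body == "" {
		body = defaultRobotsTxt
	}
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/plain; charset=utf-8")
		_, _ = w.Write([]byte(body))
	}
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/robots.txt", robotsHandler("")) // empty string -> default
	// ... existing feed and file handlers would be registered here ...
	_ = http.ListenAndServe(":8080", mux)
}
```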
Benefits:
- Helps prevent unnecessary crawler traffic and reduces bandwidth usage.
- Provides users with more control over how their podsync instance is exposed.
- Simplifies setup by handling robots.txt within podsync, rather than requiring external web server configuration.
Alternatives:
At present, users can serve their own robots.txt from a reverse proxy or external web server in front of podsync, but integrating this directly into podsync would streamline the process.
Additional Consideration:
As an alternative or complement, support for sending the X-Robots-Tag HTTP header could also be useful, since some crawlers respect it.
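As a rough sketch of that idea, the header could be added by a small middleware wrapping the existing handlers; the names below are illustrative and not existing podsync code:

```go
package main

import "net/http"

// noIndex wraps a handler and sets X-Robots-Tag on every response,
// asking crawlers that honor the header not to index or follow pages.
func noIndex(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("X-Robots-Tag", "noindex, nofollow")
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	// ... existing podsync handlers registered on mux ...
	_ = http.ListenAndServe(":8080", noIndex(mux))
}
```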