Webfiles and Metatags: Generate `robots.txt`
Generate `robots.txt` for sites.

Each site will have its own `robots.txt`, which must be resolved dynamically by adding a `/robots.txt` route to https://github.com/BeaconCMS/beacon/blob/7790eb72769a026c0bdfc3167aab394a9b73ce91/lib/beacon/router.ex#L79. A request to that route should call `Beacon.Lifecycle.generate_robots_txt/1`, which will provide a default implementation that should work for most scenarios:
```
# http://www.robotstxt.org
User-agent: *
Sitemap: #{endpoint.url()}/sitemap.xml
```
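For illustration, the route could be wired up roughly like this in a Phoenix router; `RobotsTxtController` is a hypothetical module name (sketched further below), not an existing Beacon module:

```elixir
defmodule MyAppWeb.Router do
  use Phoenix.Router

  # Sketch: resolve robots.txt dynamically per site rather than
  # serving a static file. `RobotsTxtController` is a hypothetical
  # name used only for this example.
  get "/robots.txt", RobotsTxtController, :show
end
```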
`generate_robots_txt/1` should receive `site` as an argument and call `Beacon.Config.fetch!(site).endpoint.url()` to fetch the current site URL, which is used as the prefix for the `sitemap.xml` location. That content should then be served with the `text/plain` content type.
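A minimal sketch of how the pieces could fit together, assuming an earlier plug has already resolved the current site into `conn.assigns.beacon_site` (an assumption; Beacon's actual plumbing may differ):

```elixir
defmodule Beacon.Lifecycle do
  # Sketch of the default implementation described above; the real
  # module in Beacon may structure this differently.
  def generate_robots_txt(site) do
    endpoint = Beacon.Config.fetch!(site).endpoint

    """
    # http://www.robotstxt.org
    User-agent: *
    Sitemap: #{endpoint.url()}/sitemap.xml
    """
  end
end

defmodule RobotsTxtController do
  use Phoenix.Controller

  # Serve the generated content as plain text. Phoenix's text/2 sets
  # the text/plain content type. Assumption: an earlier plug put the
  # current site in conn.assigns.beacon_site.
  def show(conn, _params) do
    text(conn, Beacon.Lifecycle.generate_robots_txt(conn.assigns.beacon_site))
  end
end
```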
Depends on #169
Refs
- http://www.robotstxt.org
- https://developers.google.com/search/docs/crawling-indexing/robots/intro
Depends on #95
Generating the file itself is trivial, but we need to research how to serve it on multiple domains.
@AZholtkevych we can remove the dependency on #95 and add a dependency on #169