List Bot Tags On Site
Closes #687.
I implemented this feature in the most efficient way possible in terms of network requests: we download the entire bot project as a tarfile, extract all the tags from it, and cache them to the database.
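For a rough idea of what that looks like, here's a simplified sketch (not the actual code in this PR; the bot/resources/tags path, the requests dependency, and the helper name are assumptions for illustration):

```python
# Simplified sketch of the tarball approach, not the code in this PR.
# Assumptions: tags live under bot/resources/tags in python-discord/bot,
# and requests is available.
import tarfile
import tempfile
from pathlib import PurePosixPath

import requests

TARBALL_URL = "https://api.github.com/repos/python-discord/bot/tarball/main"
TAGS_PATH = "bot/resources/tags"  # assumed location of the tag files


def fetch_tags() -> dict[str, str]:
    """Download the bot repo once and return {tag name: markdown body}."""
    tags: dict[str, str] = {}
    with tempfile.TemporaryFile() as buffer:
        with requests.get(TARBALL_URL, stream=True, timeout=30) as response:
            response.raise_for_status()
            for chunk in response.iter_content(chunk_size=65536):
                buffer.write(chunk)
        buffer.seek(0)
        with tarfile.open(fileobj=buffer, mode="r:*") as archive:
            for member in archive.getmembers():
                path = PurePosixPath(member.name)
                # Members are prefixed with "<owner>-<repo>-<sha>/", so match on the suffix.
                if member.isfile() and TAGS_PATH in member.name and path.suffix == ".md":
                    extracted = archive.extractfile(member)
                    if extracted is not None:
                        tags[path.stem] = extracted.read().decode("utf-8")
    return tags
```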
Ideally, I'd also like to add a bit more flavor info to the pages, such as when they were last updated and the authors that have worked on them, but the only way to get that data from the API is to request data for a specific commit, which means making ~100 requests in total. We can't do that without being rate-limited, but it could be something to consider. I also considered downloading the git repo (as opposed to the content only), but the API doesn't seem to return that; we'd need to clone with git.
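For reference, one way to pull that data per file would be the list-commits endpoint filtered by path; a sketch of what each of those ~100 requests would look like (not part of this PR, and the tag path is just an example):

```python
# Sketch of the per-file metadata lookup discussed above; one request like this
# per tag is what pushes the total to ~100 requests.
import requests

COMMITS_URL = "https://api.github.com/repos/python-discord/bot/commits"


def tag_history(tag_path: str) -> tuple[str, set[str]]:
    """Return (last updated timestamp, author names) for a single tag file."""
    response = requests.get(COMMITS_URL, params={"path": tag_path}, timeout=30)
    response.raise_for_status()
    commits = response.json()  # newest commit first
    last_updated = commits[0]["commit"]["committer"]["date"]
    authors = {c["commit"]["author"]["name"] for c in commits}
    return last_updated, authors


# e.g. tag_history("bot/resources/tags/some-tag.md")  # hypothetical path
```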
Speaking of the ~100 tags, I've gone through quite a few and they all look good. They should all render fine, since the static deployment builds every one of them, but there could be some formatting oddities I've missed. Feel free to tell me if so.
Deploy Preview for pydis-static ready!
Name | Link
---|---
Latest commit | a71a595ecf4e83db94285e762821e71396a15cc7
Latest deploy log | https://app.netlify.com/sites/pydis-static/deploys/636630cca8ddb00009068534
Deploy Preview | https://deploy-preview-763--pydis-static.netlify.app
Preview on mobile | Toggle QR Code
To edit notification comments on pull requests, go to your Netlify site settings.
Coverage remained the same at 100.0% when pulling a71a595ecf4e83db94285e762821e71396a15cc7 on bot-tags into 5c23e35dfe4f934722fe680298b7a3bdd3bc5447 on main.
What about the tag "rule <#>"?
Those aren't tags; that's a command in and of itself. Also, the rules are already listed on our website (it's what the rules command links to).
@Numerlor I was not aware of the tag groups, is there an example of them currently?
I don't think any are currently used or that there are any real examples, but they're created through the directories I mentioned:
tags/
├── tag_group1/
│   └── tag.md
├── tag_group2/
│   └── another_tag.md
└── plain_tag_without_group.md
This structure would create two tag groups, tag_group1 and tag_group2, with tag.md and another_tag.md in them respectively.
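As a purely illustrative sketch, deriving the group from the path could look something like this (the paths and function here are made up, not anything in the repo):

```python
# Illustration only: deriving a tag's group from its path under tags/.
from __future__ import annotations

from pathlib import PurePosixPath


def group_of(path: str) -> str | None:
    """Return the tag group name, or None for a tag sitting directly in tags/."""
    relative = PurePosixPath(path).relative_to("tags")
    return relative.parts[0] if len(relative.parts) > 1 else None


for example in (
    "tags/tag_group1/tag.md",
    "tags/tag_group2/another_tag.md",
    "tags/plain_tag_without_group.md",
):
    print(PurePosixPath(example).stem, "->", group_of(example))
# tag -> tag_group1
# another_tag -> tag_group2
# plain_tag_without_group -> None
```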
Thanks for the info, I'll work on implementing it
Tag groups added in 45cdb27a. Due to the way tags are loaded in static previews, it was pretty easy to add in some under a fake group. Look for "very-cool-group" (these are sorted alphabetically like all the other tags, but we can place tag groups at the top of the list if we want to).
Live example: https://deploy-preview-763--pydis-static.netlify.app/pages/tags/very-cool-group/
> Ideally, I'd also like to add a bit more flavor-info to the pages, such as when they were last updated, and the authors that have worked on them, but the only way to get that data from the API is to request data for a specific commit, which means making ~100 requests in total. We can't do that without being rate-limited, but it could be something to consider. I also considered downloading the git repo (as opposed to the content only), but it doesn't seem the API returns that, we'd need to clone with git.
This should be relatively simple to cache; the contents API on a directory returns all the files with their git blob SHAs. If those are saved, the individual requests for a file's commits only need to be made when the new SHA differs from the saved one. A full response from the contents endpoint could also be avoided, since it provides an ETag header, but the download size there should be fairly small anyway.
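Something along these lines, as a hedged sketch (in-memory dicts stand in for the database cache here, and the directory URL is an assumption):

```python
# Hedged sketch of the SHA/ETag caching idea; field names follow the public
# GitHub contents API, but the cache layout is made up for this example.
import requests

CONTENTS_URL = "https://api.github.com/repos/python-discord/bot/contents/bot/resources/tags"

sha_cache: dict[str, str] = {}   # filename -> last seen git blob SHA
etag_cache: dict[str, str] = {}  # url -> last seen ETag


def changed_entries() -> list[dict]:
    """Return the contents-API entries whose blob SHA differs from the cached one."""
    headers = {}
    if CONTENTS_URL in etag_cache:
        headers["If-None-Match"] = etag_cache[CONTENTS_URL]

    response = requests.get(CONTENTS_URL, headers=headers, timeout=30)
    if response.status_code == 304:
        return []  # nothing changed; a 304 shouldn't count against the rate limit
    response.raise_for_status()
    if "ETag" in response.headers:
        etag_cache[CONTENTS_URL] = response.headers["ETag"]

    changed = []
    for entry in response.json():
        if sha_cache.get(entry["name"]) != entry["sha"]:
            changed.append(entry)
            sha_cache[entry["name"]] = entry["sha"]
    return changed
```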
The cache solution sounds smart; I'll look into that. In terms of downloading, I'm not sure there's a good way to bulk-download files without downloading the entire repo. Did you have a solution in mind?
I think the best solution is to just use the raw download URLs from the contents API call. I don't believe those count toward the rate limits either, so the only disadvantage is that it needs individual requests instead of a bulk one for the changed files; but that should only matter when filling up the cache, as after that the changes will mostly be to individual files.
The GraphQL API can fetch all the files and their contents together, but I'm not sure whether that can be filtered further when not all files are needed, and it requires auth.
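For the raw-URL route, roughly (sketch only; download_url and name are real fields on contents-API entries, but the function itself is hypothetical):

```python
# Sketch of fetching only the changed files through their raw download URLs.
# "download_url" is the field the contents API returns for each file entry.
import requests


def download_changed(entries: list[dict]) -> dict[str, str]:
    """Fetch raw markdown for each changed entry, keyed by filename."""
    contents = {}
    for entry in entries:
        response = requests.get(entry["download_url"], timeout=30)
        response.raise_for_status()
        contents[entry["name"]] = response.text
    return contents
```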
I added in the commit stuff, I think it turned out decently.