Update Credits page UI
Motivation:
NutScript's developers are many in number, and the limited, simple, and (in my opinion) amateur design previously used for the credits page did not give proper dues to all contributors. This new design gives whoever works on the project a place in the framework.
Challenges:
From the beginning I wanted to show the contributors' GitHub avatars. This poses a significant challenge, as the GitHub API/CDN does not support TLS 1.0 (for obvious reasons), and the Awesomium branch only supports up to TLS 1.0. So I was left with two options:
- Make a website that fetches the contributors' avatars and then serves them over TLS 1.0 (I believe the TLS mismatch is the issue, and this could solve it)
- Only support avatars on the x86-64 branch of Garry's Mod
I've picked option 2 for now, so I've had to create two layouts for the credits page, as shown below. Technically there is a third option, but I'm not doing it.
Layout for x86-64 branch:
Layout for 'main' branch:
Closing Remarks:
This is posted as a draft PR firstly because of the possibility of switching to option 1 before merging, and secondly because the UI currently has minimal scaling: most importantly, the fonts, the logo, and the contributor boxes don't scale. (This could be a minor issue though, as the scaling on the old credits page was not much better, if at all.)
This is the first of several UI refreshes I have planned.
This looks pretty cool. +1
+1
Like the idea, encourages people to contribute.
Will definitely need to utilize that exclude list for some of the people who are community banned though.
I'd also suggest giving some people, such as the actual creators (and maintainers) of the framework, some special treatment.
An alternative way of handling this without the use of DHTML, and thus without relying on x86-64, would be to fetch the avatar URL just like you are doing, then fetch the contents of the image, create a material out of it, and cache the image to the data folder. Compare the image checksum to the one stored on the server to see if they match, and if not, invalidate the cache and redownload.
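For reference, a minimal GLua sketch of that caching approach, assuming a hypothetical cache directory, callback, and a server-provided checksum (none of these names come from the PR):

```lua
-- Rough sketch of the suggested non-DHTML approach; paths and names are illustrative.
local CACHE_DIR = "nutscript/avatars"
file.CreateDir(CACHE_DIR)

local function fetchAvatar(login, avatarURL, serverChecksum, callback)
	local path = CACHE_DIR.."/"..login..".png"

	-- Reuse the cached copy if it still matches the checksum reported by the server.
	if (file.Exists(path, "DATA") and util.CRC(file.Read(path, "DATA")) == serverChecksum) then
		callback(Material("data/"..path, "noclamp smooth"))
		return
	end

	-- Otherwise invalidate the cache: redownload the image, write it to the data
	-- folder, and build a material from the cached file.
	http.Fetch(avatarURL, function(body)
		file.Write(path, body)
		callback(Material("data/"..path, "noclamp smooth"))
	end, function(err)
		ErrorNoHalt("Failed to fetch avatar for "..login..": "..err.."\n")
	end)
end
```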
Yep, that was the third option I was debating.
I figured, though, that it would add too much unneeded complexity to such a simple plugin, and that making the special layout only work on x86-64 incentivizes that branch even more.
It's still on the table if we truly care about the main branch like that. (personally I don't, which is why I shied away from it here)
I'd also suggest giving some people, such as the actual creators (and maintainers) of the framework, some special treatment.
With the latest commit I've gone ahead and added sections for Creators (Chessnut, rebel1324) and current Maintainers. It can be expanded to fit others if needed. I've also made some progress on scaling (although it remains mostly untested).
This is how it looks:
I've also changed how cl_helps.lua functions slightly (it may cause bugs and needs more testing, preferably with more help page entries). Before, it would destroy the credits panel and create it again (in fact it called the function to get the page HTML twice), so I've made it better in that regard by only calling it once.
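A rough sketch of the general idea behind that change (this is not the actual cl_helps.lua code; getCreditsHTML and buildHTML are hypothetical names): the generated HTML is memoized so the builder only runs once.

```lua
-- Sketch only: cache the generated credits HTML so the builder is called a single time.
local cachedHTML

local function getCreditsHTML(buildHTML)
	cachedHTML = cachedHTML or buildHTML()
	return cachedHTML
end
```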
I still haven't decided whether I want to work on making avatars work on the main branch using the method that @armdupe suggested. I was also thinking it might be worth setting up our own cache server for these avatars/contributors, so we can change on the fly which avatars are up to date and which users are shown in the list (this could prevent malicious contributors from making their avatars/usernames unfit for showing up in the list) without relying on servers keeping the Lua exclude list updated. (This may not be needed if contributors can be removed from showing up on GitHub, though; I don't know.) I would like to hear if it's truly necessary first.
The method you are using currently is definitely the most optimal. My suggestion was just a plausible workaround for the Awesomium rendering incapability and definitely not an efficient thing to do, considering having to chunk all the data, the IMaterial expenses, etc. I think you should continue to go with the DHTML route of streaming the images hosted by GitHub's CDN. The UI looks really good; it's simplistic and neat, an all-around good design scheme.
It might be best to limit it from 100 per page to 50, as that is a greater cut-off area at the moment for one-time contributors (which will somewhat help filter out someone contributing and then setting their name or avatar to something inappropriate). As for an exclusion list, there could be an external txt or json file which gets fetched from within the main repository to keep it all compartmentalised rather than relying on a cache server. All it would take is for someone to be a nuisance and try to DDoS the cache server, which is why I think it should be stored within the repo or maybe a gist?
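To illustrate, a sketch of fetching such a JSON exclusion list from a raw repo/gist URL in GLua (the URL and data shape are placeholders, not anything that actually exists):

```lua
-- Illustrative only: the URL and JSON layout are made up for the example.
local EXCLUDE_URL = "https://gist.githubusercontent.com/<user>/<id>/raw/exclude.json"

local function fetchExcludeList(callback)
	http.Fetch(EXCLUDE_URL, function(body)
		-- Expecting a plain JSON array of GitHub logins, e.g. ["someuser", "otheruser"].
		local exclude = util.JSONToTable(body) or {}

		-- Convert to a lookup table so contributors can be filtered quickly.
		local lookup = {}
		for _, login in ipairs(exclude) do
			lookup[login] = true
		end

		callback(lookup)
	end, function()
		-- If the list can't be fetched, fall back to showing everyone.
		callback({})
	end)
end
```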
I suggest keeping the font consistent with the default NS fonts. For instance, refer to the font used for the f1 categories on the top and the one used for "Creators" and "Maintainers"
@armdupe
It might be best to limit it from 100 per page to 50
Agreed, will change with the next commit ^^
[exclusion list,] there could be an external txt or json [...] within the repo or maybe a gist?
Yep, this sounds like a good plan to me. I think a separate gist would be the best bet since it's not directly related to the development of the framework.
@TovarischPootis
I suggest keeping the font consistent with the default NS fonts. For instance, refer to the font used for the f1 categories on the top and the one used for "Creators" and "Maintainers"
I do agree with this sentiment, as 'Roboto' is not used anywhere else in NS (except for the old credits page), and I definitely am willing to change it. However, the default font (Arial) is better reserved for cases where the text should be heavier (and therefore more indicative of a button or a large presence).
In this instance (a sub page (the help menu) of a sub page (the F1 menu)), I feel the UI should by now have thinner fonts to ease the eye strain from looking at thick/Arial fonts the entire time (breaking up the flow is very important to good UI).
I played around with the framework's default secondary font, 'Segoe UI', and found that 'Segoe UI Light' looks as good as 'Roboto Th', breaks up the presence, and gives good readability to the page itself. Here's what that looks like:
With the latest commit we've added working support for showing avatars on the 'main' branch (thanks to @armdupe for suggesting a simple solution to that problem). The performance when loading these 'main' branch avatars is, however, subpar; I'm not sure if that can be entirely remedied.
Secondly, and more importantly, we've switched from accessing the GitHub API directly to using a cached version hosted at nutscript.xyz. The blocklist, extra name data, and avatars are all fetched and managed in the cached version, leaving us only one query to make in gmod (reducing the queries on x86-64 from 3 (on my local version) down to 1).
This could change once again, however, as I believe we have an option to keep it all on the repo using GitHub Actions and github-pages. We could possibly set up an action that runs every time something is committed to main, queries the contributors list, avatars, etc... and puts them up on a GitHub page. This could be better than using Cloudflare to host our cache. We'll see.
With the latest commit we've fixed the poor performance (and huge memory usage) on 'main' by removing the dependency on DHTML altogether. Now it will download and cache avatars directly to your data folder (only a couple of MB in size total, don't worry) and refresh them any time you join a new server and look at the credits for the first time. Then it will use those .pngs to create materials, which can be rendered very quickly.
Also, I've coded up a GitHub Action to cache the contributors data & avatars to a repo; instead of relying on Cloudflare's CDN to deliver the data to users, we'll just have them fetch it from the repo. This does require more work than just merging this commit, though: we need to set up a new branch on the main repo here, put in the workflow, and add a secret repo variable.