Adding a Manim Showcase for Community Work

Over the last few weeks, I have been working on a showcase for Manim.

Use Cases

  • Enables new users to see what Manim is capable of and used for
  • Gives existing users a place to show off what they have made in Manim
  • Creates a centralized place where teachers, learners, and Manim users can search for (hopefully) high quality videos explaining a certain topic
  • In many cases, it can help Manim users. If source is provided (optional), it will be linked along with the uploaded video. This enables anyone to see how a certain animation can be achieved

Current State of the Showcase

The Manim Showcase is fully functional as of now. An unofficial version can be viewed here. There is currently one sample video, supplied by a community member, to show how it looks. To be clear, this is not the finished end goal; it is just about ready to go live with the minimum features required to do so. A few other big features, such as searching videos, are planned to be implemented later.

Features

These are the main points of the Showcase, to provide insight into how it works. Much time has also been invested in smaller parts of the website, outside of these main points, to create a consistent and smooth experience.

  • Lists all approved videos from a JSON file. Entries in the JSON file are read as needed using Lazy.js. Entries in the file have also been stringified without tabs and as arrays to remove the need for repetitive key values; my testing shows this reduces the size of the JSON file by roughly 90%. In other words, it will allow almost double the number of videos to be included in one file and reduce page load time (a rough sketch follows this list). I think this should be able to accommodate several hundred thousand videos before hitting the GitHub file size limit of 100 MB. If Lazy.js does not work as well as expected once we have thousands of videos on there, it may be necessary to move to a database or some other kind of storage. However, I do not foresee any need to move away from the current approach until well into the future.
  • Has a functioning pagination system that kicks in once a certain number of videos are uploaded (currently set to 10 per page, very easy to change)
  • Utilizes a form to submit videos (protected by an unobtrusive captcha)
  • Is completely free (hosted on GitHub pages, small backend for form handling on a Cloudflare Worker - 100K free form submissions per day)
  • Has an approval process in place to prevent someone from spamming their video straight onto the website or uploading inappropriate/irrelevant videos. Approving a video is as easy as adding a label to an issue and closing it.
  • Once a video has been submitted or approved, it can be edited by repository maintainers
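
To make the storage point above more concrete, here is a minimal sketch — not the actual showcase code; the field order, file name, and rendering helper are assumptions for illustration — of how a compact array-based JSON file can be read lazily and hydrated only for the entries shown on the current page:

```js
// Minimal sketch: entries are stored as arrays instead of repeated key/value
// objects, and only the entries for the visible page are hydrated.
// Field order and file name are illustrative assumptions.
const FIELDS = ["title", "author", "url", "description", "tags", "date"];

fetch("submissions.json")      // one compact, single-line JSON file
  .then((res) => res.json())   // e.g. [["Fourier Series", "Jane", "https://...", ...], ...]
  .then((rows) => {
    const page = Lazy(rows)    // Lazy.js global; nothing runs until .toArray()
      .map((row) => Object.fromEntries(FIELDS.map((key, i) => [key, row[i]])))
      .take(10)                // hydrate only the 10 entries shown per page
      .toArray();
    renderPage(page);          // hypothetical rendering helper
  });
```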

Moving Forward

If this seems generally acceptable to the community, the next step would be to transfer it to be an official part of the ManimCommunity organization. Here's what this will entail:

  • It will need to be added to the manim.community domain @behackl
  • I will need to DM you @eulertour regarding a token for the backend, or whatever route that should take. I will explain later.
  • I will put the code in a fresh repository (because of numerous rounds of testing, the current repo is extraordinarily messy with branches, deleted issues, actions, etc.)
  • I will then be able to change some parameters in the code and GitHub workflows for the new branch, domain, etc.
  • We need a few other people who would be willing to help approve videos. This does not need to be a big time commitment. First, the approval process is extremely short and is detailed below. Second, no one is dictating that there can never be a slight backlog of videos to be approved (if that even happens). @naveen521kk and (I can't mention him on here) Myssa, you both showed interest in creating a showcase before. Would either of you be willing to help with this process?
  • After all is settled, it should probably be linked to from the top of the docs Example Gallery. Additionally, an announcement on Discord and/or Reddit alerting people that they can submit all the work they've already done would be great!

Approval Process

Here is a sample of what a submission looks like. I envision the approval process to look like this:

  • Confirm that the video is created in Manim and appropriate. I do not plan on watching all of what is submitted. That would be insane.
  • Check values such as the description, tags, etc. and determine whether they are acceptable. For example, if it is a calculus-related video, Calculus should probably be one of the subfields. Titles shouldn't be clickbaity; they should be descriptive of the subject matter. This will become important later, when searching, sorting, and filtering of videos is hopefully added.
  • If everything looks good, add the "submission approved" label and close the issue. This will trigger an automatic process that takes care of the rest.
  • If everything is NOT good, there are a few options. If the video doesn't have a snowball's chance in hell of being accepted, add the "submission rejected" label and close the issue. If it is blatantly inappropriate, delete the issue or request that it be deleted by someone with permission to do so. If it only needs a slight tweak in the information provided and the change is obvious, go ahead and make it, then approve the submission. If changing the title or author values, they should also be updated in the name of the issue. If a change is needed but what it should be changed to is not obvious, add the label "changes requested." This signals that the person who submitted the video must provide the necessary changes (what needs to be changed should be stated in a comment). Whatever they comment as the changed values, a repo maintainer will have to edit the original post and update the values. If no response is given after a sufficiently long time, close the issue. It's their loss.

Questions to be addressed

  • @behackl Can you confirm that I did the meta and icon information correctly? I tried to closely resemble what you did here. Mine is located here. I can confirm that while the paths for the icons are different, the paths are correct. The icons are located here. I just copied the ones you had (ignore favicon2).
  • @naveen521kk or anyone else with knowledge of CDNs, is this appropriate? I created a folder called CDN here to house fonts and pictures. The fonts folder only contains the FontAwesome icons necessary for the header (to minimize size and replicate the icons used on other parts of the Manim website). The pictures are used in the captcha. These are accessed through jsDelivr's GitHub CDN. Is it preferable to do this or to just include these fonts and images in the website assets? It would seem this should be faster by reducing asset size (and increasing load speed of these assets? not sure). If this is good, should the same be done for website icons?
  • When first designing this showcase, I tried to think through all the information that should be necessary to submit a video. I decided on what I think is a good balance of information to give context, links to more, and fields, subfields, and tags to filter content by without being excessive. Until the other day, I never thought to include a field about what language the video is made in. For example, I believe I have been seeing a lot of German manim videos lately. It will no doubt be helpful to include a selection for language in order to filter for a certain language or exclude certain languages from results. The question is this: how should this be implemented? A dropdown of languages seems most feasible to me. Is anyone able to help with compiling a list of languages for a dropdown? I'm not too sure what to do in this department, and it seems like it is going to be the delaying factor in the Showcase going live.

What should not be discussed here

  • Stylistic nitpicking. This does not pertain to whether the showcase should be added. I agree that the styling of video titles is ugly. Ideas on how that can be improved are welcome, but not particularly relevant here.
  • Any other nitpicking that does not pertain to whether the showcase should be added. While welcome at a later point, it is not important right now.

WampyCakes avatar Jan 13 '21 20:01 WampyCakes

Thank you so much for the work here!

I have some scattered thoughts, here in no particular order:

  1. Currently, providing source code is optional. I move that we "strongly suggest" each contribution to provide source code. Source code is extremely important since manim is both the rendered videos as well as the code that generates them.
  2. Who gets to approve videos? Is there also a process for approving source code (if provided)?
  3. Where are the videos hosted?
  4. You said "Once a video has been submitted or approved, it can be edited by repository maintainers". What does it mean to "edit" a video here?
  5. "I do not plan on watching all of what is submitted." This brings up a few things. First of all, moderating user content can become a huge time sink, if the community grows large. Since we are putting this under the ManimCommunity organization, we could be held accountable for sensitive content. For example, do we want to moderate profane language or adult imagery? If so, then the approval process should involve watching the actual video in full length. If not, we can issue a blanket statement that the community devs are not responsible for the content, or some other such language.

leotrs avatar Jan 14 '21 15:01 leotrs

Currently, providing source code is optional. I move that we "strongly suggest" each contribution to provide source code. Source code is extremely important since manim is both the rendered videos as well as the code that generates them.

I recommend you go click the link for the showcase and hit the upload a video button. It says, "While optional, we highly encourage including source code!" If you want it to be a bit stronger, I am open to italicizing "highly."

Who gets to approve videos?

Whoever we designate. Likely that would be anyone in the Manim organization, as it utilizes GitHub issues for managing submissions; though, we should have a few people who have committed to spending at least a little time approving videos. This question does lead to an interesting scenario that is rather feasible. We could designate members of the community who seem trustworthy to approve videos in the future. If that requires rejiggering a bit to avoid giving them permissions in the org, that is possible. It would also probably turn maintenance into a community effort, speeding up approvals and giving people a sense of contribution.

Is there also a process for approving source code (if provided)?

Nope. No need. It simply links to either a Bitbucket, GitHub, or GitLab repo.

Where are the videos hosted?

Users have the choice to use YouTube or Vimeo. It's merely an embed. Saves us many different kinds of headaches :)

You said "Once a video has been submitted or approved, it can be edited by repository maintainers". What does it mean to "edit" a video here?

Take a look at this. Those JSON values are what makes up a submission. Editing that data before approving would update the submitted info (and therefore show the updated version on the website). The automated process reads the JSON straight out of the issue body to allow for this. If a video has already been approved and needs to be edited, the value(s) for that submission that need updating can be edited here. That is the centralized file where all approved submissions go (until this potentially needs to be revamped years in the future).
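
For illustration only (this is not the actual workflow code), the "read the JSON straight out of the issue body" step could be as simple as the sketch below. It assumes the bot places the raw submission JSON in the issue body when the issue is created:

```js
// Sketch: recover the submission data from a closed issue's body.
// Assumes the automated issue contains the raw submission JSON object.
function parseSubmission(issueBody) {
  const start = issueBody.indexOf("{");
  const end = issueBody.lastIndexOf("}");
  if (start === -1 || end === -1) throw new Error("No JSON found in issue body");
  return JSON.parse(issueBody.slice(start, end + 1)); // { title, author, url, ... } -- fields illustrative
}
```

Editing the issue body before adding the "submission approved" label therefore changes exactly what ends up in the central submissions file.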

we can issue a blanket statement that the community devs are not responsible for the content, or some other such language.

This seems the most feasible solution. There is no way that anyone should be forced to watch hours and hours of videos for free. I made this showcase and want to see it used, but there is no way I would do that. I would say it will probably be a rare case that something shouldn't be approved, so likely fairly lax standards on video content. Just skipping through the video should be sufficient. Language would require watching the whole thing, so, again, probably that blanket statement.

WampyCakes avatar Jan 14 '21 21:01 WampyCakes

Awesome work here! I'm in favor of a community showcase as both a way to demonstrate what Manim is capable of (and in the case of source code -- showing how it was written).

Here are some of my suggestions, questions, and responses to a few things you and @leotrs mentioned above:

Suggestions

  1. Add a field for which version of Manim the source code/video is written for -- this way videos can easily be reproduced using the appropriate version of Manim. This will be important to know with more versions of manim in the future.
  2. Just like the language category. Maybe there should be a filter/tag for sensitive content (i.e. NSFW content/content with profanity)

Questions/Food for thought

How can the creator edit their fields after submission? Can they do this through the created issue? Do they reopen their issue, make changes, and close for reapproval? What if they wanted to take their video down? How can we allow them to do this through their issue? Does this currently require contacting the Moderators/Devs to do this, and if so, how can we avoid the time sink of requiring Moderators/Devs to manually edit their fields?

Responses

Currently, providing source code is optional. I move that we "strongly suggest" each contribution to provide source code. Source code is extremely important since manim is both the rendered videos as well as the code that generates them.

I recommend you go click the link for the showcase and hit the upload a video button. It says, "While optional, we highly encourage including source code!" If you want it to be a bit stronger, I am open to italicizing "highly."

I think it's fine as is, but italicize if you want/if you think it'll get more source code submissions (which are helpful, as @leotrs mentioned; manim is both the code and the video it produces).

Is there also a process for approving source code (if provided)?

Nope. No need. It simply links to either a Bitbucket, GitHub, or GitLab repo.

I wouldn't say no need as there's a potential security issue in not having a process for approving source code. It's unlikely that someone would try to harm the community with malicious code, but it's something to consider. If source code is provided, the specific commit in their repo should be scanned/reviewed before approval since it's possible a malicious user would later update their main/master branch after being approved. If we don't intend to vet the source code, we should at minimum issue a warning that the code hasn't been analyzed and the community should be cautious in running code they don't understand.

Who gets to approve videos?

Whoever we designate. Likely that would be anyone in the Manim organization as it utilizes GitHub issues for managing submissions; though, we should have a few people who have committed to spending at least a little time approving videos.

i.e. Community Dev/Org member with the Triage+ role

jsonvillanueva avatar Jan 15 '21 00:01 jsonvillanueva

Add a field for which version of Manim the source code/video is written for

Good point! We can include a little info hover that tells them to run the manim version command to figure this out (and use a dropdown list of versions for consistency)

Just like the language category. Maybe there should be a filter/tag for sensitive content (i.e. NSFW content/content with profanity)

Not a big fan of this one. I generally think that outside of potential language, NSFW videos shouldn't be approved in the first place.

I also worry that we are going to reach a point where the submission form becomes so long it is a deterrent to uploading videos, particularly when someone is uploading a whole bunch of their old videos that aren't on the showcase.

How can the creator edit their fields after submission? Can they do this through the created issue? Do they reopen their issue, make changes, and close for reapproval? What if they wanted to take their video down? How can we allow them to do this through their issue? Does this currently require contacting the Moderators/Devs to do this, and if so, how can we avoid the time sink of requiring Moderators/Devs to manually edit their fields?

This is a difficult one. Hopefully editing and deleting will be a rare occurrence. Because the issue is always posted by the same (automated) user, they would not be able to edit or reopen the issue. I know, this sounds like a big limitation, but it's because of constraints on the implementation. Originally, the plan was that in order to submit a video, people would have to sign in using GitHub OAuth (in which case the issue could be made by them). But because this is a static website, that's, uh, pretty difficult, which led to changing course. Plus, this doesn't require you to sign in (which I always find to be a bonus, as I am hesitant to use "Sign in with..."). In truth, for now this would require creating a new issue to request such changes if the original issue has been closed. Open to ideas on alternatives, but I expect this to be a circumstance that puts us between a rock and a hard place.

Is there also a process for approving source code (if provided)?

Nope. No need. It simply links to either a Bitbucket, GitHub, or GitLab repo.

I wouldn't say no need as there's a potential security issue in not having a process for approving source code. It's unlikely that someone would try to harm the community with malicious code, but it's something to consider. If source code is provided, the specific commit in their repo should be scanned/reviewed before approval since it's possible a malicious user would later update their main/master branch after being approved. If we don't intend to vet the source code, we should at minimum issue a warning that the code hasn't been analyzed and the community should be cautious in running code they don't understand.

The only thing that could be changed on their end is what is contained in the repo. They can't change the URL, so traveling to the link should always be safe. Regarding running the code itself, while it may seem harsh, this isn't really our problem. When searching for any code on the Internet and finding results on GitHub, discernment on whether something should be run is always necessary. If someone wants to run something they haven't looked at, our warning won't stop them. They'll just burn themselves 🤷‍♂️

If you wanted to run a virus scan on the code, this brings up a few difficulties.

  • VirusTotal and similar virus scans are slow, meaning this can't be run on the Cloudflare Worker backend (which has a max execution time of 10ms). We don't want to run this in the user's browser because then they can circumvent it.
  • If someone changes the code later, like you said, this won't catch it
  • Those virus scans can only detect so much. You could probably create malicious python code that goes undetected. If you were trying to write malicious code for this purpose, wouldn't you run it through those scans first?

This showcase is not meant to endorse any content or affiliate any of it with the ManimCommunity org. If we want the blanket statement suggested by @leotrs that talks about video content to include source, that's fine. But I see this as any other website on the Internet. Use discernment when browsing, just like anything else.

WampyCakes avatar Jan 15 '21 00:01 WampyCakes

Just like the language category. Maybe there should be a filter/tag for sensitive content (i.e. NSFW content/content with profanity)

Not a big fan of this one. I generally think that outside of potential language, NSFW videos shouldn't be approved in the first place.

I agree. I just wanted to say that this would require the approving dev to actually watch the entire video! :sweat_smile: So that should be part of the approval process.

leotrs avatar Jan 15 '21 14:01 leotrs

Also, I agree there is no need to check the code for security as long as all we show is the video (which is hosted elsewhere). If we add a link to the source code, then we can, again, issue a blanket statement saying that the ManimCommunity org has not approved the code etc.

leotrs avatar Jan 15 '21 14:01 leotrs

@leotrs We can just put a blanket statement at the bottom of the page, and, if you guys think it is absolutely necessary (I do not), we can create a modal that pops up the first time you visit the page with that statement. There is no feasible way to watch that many hours of content. I only meant that by skipping through the video or scrubbing your mouse along the YouTube timeline and looking at the preview popup, you could catch most NSFW imagery in videos, though not language, which we should probably take a lax standard on (for language, I mean) considering these are not official videos from the ManimCommunity org (obviously you cannot catch everything that way for sure). No matter what, I do not think requiring every video to be watched is a good idea. I think the question is just what we want to do about it. Give one blanket statement (while briefly checking what content is in the video before approving) and call it good? I think that may be sufficient. @jsonvillanueva Maybe this can become part of #935 enforcement procedures if someone reports a video? Checking content and making a decision? Or not. Just throwing it out there.

How can the creator edit their fields after submission? Can they do this through the created issue? Do they reopen their issue, make changes, and close for reapproval? What if they wanted to take their video down? How can we allow them to do this through their issue? Does this currently require contacting the Moderators/Devs to do this, and if so, how can we avoid the time sink of requiring Moderators/Devs to manually edit their fields?

I think I have thought of a fairly good idea. We can put some instructions on the GitHub repo for the showcase that explain how to edit or delete their video. It could look something like this:

  1. Submitting a video requires giving your GitHub username (if someone doesn't have one, this method will not work for them and they will have to contact someone to ask for a change; but I imagine that most people who would do this have a GitHub account nowadays)
  2. They create a PR that edits the submissions.json file
  3. An automated check comments on the PR to tell us whether the username of the person who made the PR matches the GitHub username of the person associated with the video, and tells us exactly what was changed in the file (this is necessary because the JSON file is contained in one line to save space, which leads to difficult-to-read diffs). See the sketch after this list.
  4. All we have to do is check the comment to see if it's good, and then click merge. That can trigger a GitHub action to update the corresponding closed issue for the submission (for record keeping in case the submissions.json file gets deleted or if the user deletes their video, we delete the issue too).
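
For the automated check in step 3, something along these lines could work. This is only a sketch; the entry shape and the "github" field name are assumptions for illustration:

```js
// Sketch of the PR check: report what changed in the single-line submissions.json
// and whether the PR author owns the entries they touched.
function reviewSubmissionEdit(oldJson, newJson, prAuthor) {
  const before = JSON.parse(oldJson);
  const after = JSON.parse(newJson);

  // Entries whose serialized form differs (index-based; a real check would
  // also handle added or removed entries).
  const changes = after.filter(
    (entry, i) => JSON.stringify(entry) !== JSON.stringify(before[i])
  );

  // The edit is only auto-approvable if every touched entry belongs to the PR author.
  const authorized = changes.every((entry) => entry.github === prAuthor);

  return { authorized, changes };
}
```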

The one point of failure to this approach that I can think of is that the submissions.json file could be updated between the time the PR is created and when it is merged. Am I correct that this would be a problem? If so, maybe we just make them do it through an issue with a certain label or something like that. If we are asking for their GitHub username, we can check if it matches and automate it for the most part.

Truthfully, this entire thing would be best managed with its own dashboard and members with varying permission levels. But in the interest of staying open source, out in the open, free on a static site, and not requiring a user account to use, this is the next best thing I could come up with.

WampyCakes avatar Jan 15 '21 18:01 WampyCakes

Let's have a blanket statement somewhere in an about section or something like that. The modal is not necessary. I still think reviewers should be strongly encouraged to watch the video they are approving, if not required.

leotrs avatar Jan 15 '21 20:01 leotrs

I agree. I just wanted to say that this would require the approving dev to actually watch the entire video! 😅 So that should be part of the approval process.

Agreed. The approving dev would need to dedicate time to watching the full video to catch any (if we have any) violations. As much as I don't like the idea of using up their time for this, I very much don't like the idea of a showcase tied to the organization in name with potentially sensitive content being hosted -- unless there's a blanket statement that we haven't actually looked at the content and it isn't ours. That said, if the showcase is tied to the Org in the future --

@jsonvillanueva Maybe this can become part of #935 enforcement procedures if someone reports a video? Checking content and making a decision? Or not. Just throwing it out there.

-- it would become one of Manim's Online Spaces and be protected by the Code of Conduct. So any report of a video violating the code of conduct would be handled by the enforcement procedure. Since the showcase is not yet hosted by the Org, we can wait to add this to #935 / the code of conduct -- but it'll likely require an update at the time this is finalized/hosted by ManimCommunity. At that point, it might be possible to approve videos without requiring a full watch -- so long as, in the future, people watching a video know they can report it for any violations of the code of conduct.

I still think reviewers should be strongly encouraged to watch the video they are approving, if not required.

Definitely, should be encouraged if it's not required.

jsonvillanueva avatar Jan 15 '21 20:01 jsonvillanueva

Let's have a blanket statement somewhere in an about section or something like that.

In other words, at the bottom of the page since it's a SPA.

What's wrong with going that direction? A statement saying we haven't watched videos in their entirety and are not affiliated with nor endorse any content (and whatever other words you want to include) in said videos. Coupled with reporting to the code of conduct enforcement team, that seems acceptable to me. I just don't want several weeks' worth of work to be for naught just because no one wants to watch all of every video they approve. I think that the benefits of having a showcase, along with a statement and enforcement procedures, outweigh these downsides, especially when this would allow for a showcase that doesn't accumulate a giant backlog of unapproved videos.

WampyCakes avatar Jan 15 '21 21:01 WampyCakes

... tied to the organization in name with potentially sensitive content being hosted -- unless there's...

What's wrong with going that direction?

I can't imagine any issue with this direction. This has my approval with the blanket statement/CoC/reporting system to prevent a giant backlog.

jsonvillanueva avatar Jan 16 '21 00:01 jsonvillanueva

I'm interested in what others have to say @behackl @eulertour @XorUnison @naveen521kk @huguesdevimeux @cobordism @Aathish04. As far as I can tell, the main issue of discussion right now is whether or not the procedure to approve a video to the showcase should involve actually watching the video (to check whether it contains sensitive content).

leotrs avatar Jan 18 '21 13:01 leotrs

Hello there! I'm a huge fan of this showcase idea, but I have some questions/suggestions to make.

  • About the video review process: for my part, I will always be willing to review videos! But I understand why this process is problematic. I propose using three «reviewed» states. The lowest is «submitted», and therefore hidden from users. Once a trusted reviewer has accepted the title/subject/description etc., the video is marked as «unreviewed» and shown to users, but with an «unreviewed» mark on it. If the reviewer accepts the video entirely (meaning they watched all of it), the video's tag changes to «reviewed» and the mark disappears. A problem with the currently suggested process is that long videos, or those on complicated subjects, will be reviewed less, even though they might be the richest. With this, they can be posted even if they are not fully approved. We could also add a rule removing the «unreviewed» state for tiny videos (under 5 minutes, for example). This will also solve the problem of a video using a language that no reviewer speaks. This process will also be useful for the following point.
  • Make more content formats available. Even though Manim makes videos, one could also want to share a tiny animation they made, as a GIF or a 7-second video that won't be published on YouTube. Maybe add other ways to post videos, perhaps directly on the showcase GitHub, if the video is small enough (I don't know about GitHub size limits and quotas). Also, maybe consider presenting whole playlists, video series, or even channels. If someone makes a video series of 30-minute episodes, is there really a point in showing every episode as a unique element in the showcase?
  • Talking of video display, have you decided on a preset tag list or free string tags?
  • I've seen that every video is put in one «submissions.json» file. If a lot of videos are posted, this file might get big and make the page load slowly. Would it be possible to store every video in separate JSON files? Also, it might be because I'm a former web developer, but all of this storage machinery would really look great in a SQL database. And tag filtering would be so much easier! But this method needs a server, unless free and reliable SQL hosting services can be found.
  • Then about the website: the captcha looks very weak, given there are few images available, and you can nearly spam the thing. Are there open-source captcha services?
  • A last thing about the website is that it is not really responsive. I'm reporting this because it is not a style-related issue, but rather a usability-related one. On my phone's browser, the video is squeezed to the left.

Well, I think that's it! Don't hesitate to ask if you have any questions, either on Discord or GitHub!

MysaaJava avatar Jan 20 '21 21:01 MysaaJava

I propose using three «reviewed» states. The lowest is «submitted», and therefore hidden from users. Once a trusted reviewer has accepted the title/subject/description etc., the video is marked as «unreviewed» and shown to users, but with an «unreviewed» mark on it. If the reviewer accepts the video entirely (meaning they watched all of it), the video's tag changes to «reviewed» and the mark disappears.

I think this may be a good idea if I am understanding you correctly. Other people's input would be good here too.

Maybe add other ways to post videos, perhaps directly on the showcase GitHub, if the video is small enough (I don't know about GitHub size limits and quotas).

I think that we should avoid hosting anyone's content. That would seem to invite more liability issues that @leotrs seems concerned with. Also, it would be a pain from a technical standpoint compared to just an embed.

Make more content formats available. Even though Manim makes videos, one could also want to share a tiny animation they made, as a GIF or a 7-second video that won't be published on YouTube.

I would like to include a section for GIFs and imgs made using Manim in the future. Again, we probably would not host it ourselves so there would probably be a requirement to host it on something like imgur and give us an embed link. Regarding short videos like that, it's probably more suitable they just make it into a GIF and upload it to that section in the future.

Also, maybe consider presenting whole playlists, video series, or even channels. If someone makes a video series of 30-minute episodes, is there really a point in showing every episode as a unique element in the showcase?

I had this same thought. I am really not sure how to handle this. On one hand, I think they should all be separate because people will likely want to search for a specific part. For example, 3b1b's essence of calculus series contains videos on a lot of different topics. Being able to search for one or two relevant videos for specifically what you want is important. On the other hand, we may want to provide some sort of connection between these different videos. Not sure what form that would take. Any ideas?

Talking of video display, have you decided on a preset tag list or free string tags?

I don't understand the question.

I've seen that every video is put in one «submissions.json» file. If a lot of videos are posted, this file might get big and make the page load slowly. Would it be possible to store every video in separate JSON files? Also, it might be because I'm a former web developer, but all of this storage machinery would really look great in a SQL database. And tag filtering would be so much easier! But this method needs a server, unless free and reliable SQL hosting services can be found.

I completely agree about a database. In an effort to keep this free, I avoided going that route. In the future, it may be necessary to migrate. It's using Lazy.js (I suggest you read this page to understand why) to load from the JSON file, and it will be used for filtering too in the future. The relevant code is here. It should be able to handle a pretty large number of submissions in the file for now, but yes, a migration will need to happen sometime I imagine. If someone is willing to pay for a database, then that can be used 🤷‍♂️
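
As a rough idea of what that future filtering could look like on top of the same lazily-read list (the function and field names here are made up for illustration, not the actual code):

```js
// Hypothetical filtering over the lazily-read submissions; only one page of
// matches is ever materialized.
function findVideos(submissions, { field, language } = {}) {
  return Lazy(submissions)
    .filter((video) => !field || video.subfields.includes(field))
    .filter((video) => !language || video.language === language)
    .take(10)
    .toArray();
}

// e.g. findVideos(allSubmissions, { field: "Calculus", language: "English" });
```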

Then about the website: the captcha looks very weak, given there are few images available, and you can nearly spam the thing. Are there open-source captcha services?

It is an open source captcha. It's called RVerify.js. The idea of the captcha is that most bots cannot slide an element, which is supposed to make it effective. If you fail the captcha, it goes on to the next image. What I did not realize until now is that it does not pick a new rotation angle on failure. I believe I can change it fairly easily so that it does. That should prevent spamming it since it would be pretty difficult to guess the correct angle randomly with only one guess (within a small tolerance which I can also adjust). I think the number of pictures is kinda irrelevant to its strength? More can always be added. Additionally, we could set a max-attempts on the captcha and store it as a cookie to expire after X amount of time. We could always include honeypots, but unless we want to make the honeypot visible to users, it may not be as effective as some bots detect for CSS hiding the honeypot.
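
The max-attempts idea could be as simple as the sketch below (purely illustrative; and, as noted further down, a client-side cookie only deters casual retries, since visitors control their own cookies):

```js
// Illustrative only: count failed captcha attempts in a cookie that expires
// after an hour. Anyone can clear cookies, so this only slows casual retries.
function failedAttempts() {
  const match = document.cookie.match(/(?:^|; )captchaFails=(\d+)/);
  return match ? Number(match[1]) : 0;
}

function recordFailure() {
  const fails = failedAttempts() + 1;
  document.cookie = `captchaFails=${fails}; max-age=3600; path=/`;
  return fails;
}

// Before showing the captcha: if (failedAttempts() >= 5) { /* lock the form for a while */ }
```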

I am 100% against using captcha services like Google's recaptcha. Before deciding on RVerify, I looked at a lot of options, and I did not like any of them as much as RVerify.

A last thing about the website is that it is not really responsive. I'm reporting this because it is not a style-related issue, but rather a usability-related one. On my phone's browser, the video is squeezed to the left.

It's not great on mobile, but I find it to be usable on my device. Right now the responsiveness it does have is a result of using Bootstrap, which I hadn't extensively used before. If/when this is published, would you mind making a PR to fix responsiveness issues you have?

WampyCakes avatar Jan 20 '21 22:01 WampyCakes

Not that anyone should feel pressured to pay for hosting for this (as I can't), but if this were hosted on a server, we could actually make this into a really nice showcase. It would eliminate the need for the Cloudflare Worker, as there could be a PHP backend. We could also use a database for storing the data. I think we would still be able to deploy from GitHub after editing the website source code. I think it could also improve the submission approval process and everything. We could add reactions/a like button for videos on there. The lack of a server (though with the pro of being free) is the real limiting factor on how amazing this showcase could be.

WampyCakes avatar Jan 20 '21 22:01 WampyCakes

Maybe add other ways to post videos, perhaps directly on the showcase GitHub, if the video is small enough (I don't know about GitHub size limits and quotas).

I think that we should avoid hosting anyone's content. That would seem to invite more liability issues that @leotrs seems concerned with. Also, it would be a pain from a technical standpoint compared to just an embed.

It would only be for fully reviewed content (which is not hard for images/GIFs). And isn't it possible with pull requests? You don't have to automate every part.

Make more content formats available. Even though Manim makes videos, one could also want to share a tiny animation they made, as a GIF or a 7-second video that won't be published on YouTube.

I would like to include a section for GIFs and imgs made using Manim in the future. Again, we probably would not host it ourselves so there would probably be a requirement to host it on something like imgur and give us an embed link. Regarding short videos like that, it's probably more suitable they just make it into a GIF and upload it to that section in the future.

Also, maybe consider presenting whole playlists, video series, or even channels. If someone makes a video series of 30-minute episodes, is there really a point in showing every episode as a unique element in the showcase?

I had this same thought. I am really not sure how to handle this. On one hand, I think they should all be separate because people will likely want to search for a specific part. For example, 3b1b's essence of calculus series contains videos on a lot of different topics. Being able to search for one or two relevant videos for specifically what you want is important. On the other hand, we may want to provide some sort of connection between these different videos. Not sure what form that would take. Any ideas?

Do both ^^ something that recognizes elements from playlists.

Talking of video display, have you decided on a preset tag list or free string tags?

I don't understand the question.

By tag I meant things like «math», «quantum mechanics», «algorithm visualisation», «manim capabilities example», or «educational video» — things like this. Will the user be able to put any string as a tag, like on YouTube videos, or will there be a predefined list of tags, like on Stack Overflow?

I've seen that every video is put in one «submissions.json» file. If a lot of videos are posted, this file might get big and make the page load slowly. Would it be possible to store every video in separate JSON files? Also, it might be because I'm a former web developer, but all of this storage machinery would really look great in a SQL database. And tag filtering would be so much easier! But this method needs a server, unless free and reliable SQL hosting services can be found.

I completely agree about a database. In an effort to keep this free, I avoided going that route. In the future, it may be necessary to migrate. It's using Lazy.js (I suggest you read this page to understand why) to load from the JSON file, and it will be used for filtering too in the future. The relevant code is here. It should be able to handle a pretty large number of submissions in the file for now, but yes, a migration will need to happen sometime I imagine. If someone is willing to pay for a database, then that can be used 🤷‍♂️

I wanted to discuss the possibility of having a server for Manim Community, but I didn't think this was the right place. For the time being, we can keep everything on GitHub Pages and wait for a server to appear. I might discuss this on Discord or in a separate issue later.

Then about the website: the captcha looks very weak, given there are few images available, and you can nearly spam the thing. Are there open-source captcha services?

It is an open source captcha. It's called RVerify.js. The idea of the captcha is that most bots cannot slide an element, which is supposed to make it effective. If you fail the captcha, it goes on to the next image. What I did not realize until now is that it does not pick a new rotation angle on failure. I believe I can change it fairly easily so that it does. That should prevent spamming it since it would be pretty difficult to guess the correct angle randomly with only one guess (within a small tolerance which I can also adjust). I think the number of pictures is kinda irrelevant to its strength? More can always be added. Additionally, we could set a max-attempts on the captcha and store it as a cookie to expire after X amount of time. We could always include honeypots, but unless we want to make the honeypot visible to users, it may not be as effective as some bots detect for CSS hiding the honeypot.

I am 100% against using captcha services like Google's recaptcha. Before deciding on RVerify, I looked at a lot of options, and I did not like any of them as much as RVerify.

Agree with that, but I feel that this captcha is useless, as it can easily be cracked. Few images means that you can make a bot that recognizes them all, and they are available in the repo source code. And storing failed attempts as cookies is ineffective, as the server has no control over cookies, and an attacker could just reset their cookies all the time. I don't think there is a good solution for now; maybe leave it as it is.

A last thing about the website is that it is not really responsive. I'm reporting this because it is not a style-related issue, but rather a usability-related one. On my phone's browser, the video is squeezed to the left.

It's not great on mobile, but I find it to be usable on my device. Right now the responsiveness it does have is a result of using Bootstrap, which I hadn't extensively used before. If/when this is published, would you mind making a PR to fix responsiveness issues you have?

I would have loved to, but I'm completely unfamiliar with the HTML framework you used (the thing parsing .vue files). But I'll make an issue about that.

OK! Time to sleep now!

MysaaJava avatar Jan 20 '21 23:01 MysaaJava

It would only be for fully reviewed content (which is not hard for images/GIFs). And isn't it possible with pull requests? You don't have to automate every part.

I'm just generally not a fan of storing user content, as this could lead to its own storage and related issues. And I do kinda disagree about automation, as any nice UX would not require the user to submit a PR for that.

By tag I meant things like «math», «quantum mechanics», «algorithm visualisation», «manim capabilities example», or «educational video» — things like this. Will the user be able to put any string as a tag, like on YouTube videos, or will there be a predefined list of tags, like on Stack Overflow?

Yes, the user can put anything. That's something that should be checked before approving a submission.

I wanted to discuss the possibility of having a server for Manim Community, but I didn't think this was the right place. For the time being, we can keep everything on GitHub Pages and wait for a server to appear. I might discuss this on Discord or in a separate issue later.

Agreed. A server would be very nice.

Agree with that, but I feel that this captcha is useless, as it can easily be cracked. Few images means that you can make a bot that recognizes them all, and they are available in the repo source code. And storing failed attempts as cookies is ineffective, as the server has no control over cookies, and an attacker could just reset their cookies all the time. I don't think there is a good solution for now; maybe leave it as it is.

I'm not saying it's crackproof, but I think it's a more effective solution than you think. From what I've read, it's difficult to make a bot slide anything. You'd still have to figure out what angle is correct by viewing the output of sliding, and in order to do all of this you'd need to really invest some time into trying to spam this form. Aren't most bots just crawlers all over the web? This would take a concerted attack to circumvent. I encourage you to try to make a bot to circumvent it. Just don't spam the submission form if you succeed and don't release the source code. I think it'll turn into a harder task than most spammers would be willing to do just to defeat 1 form type on an open source project. It's not like a major company or common captcha.

I only mentioned cookies because if you used JS to just keep track of failed attempts, a refresh is all that is needed to continue trying. Even if an errant form submission passes through, I think a bot would have a low success rate (no solution is foolproof, we only need to mitigate).

I would have loved to, but I'm completely unfamiliar with the HTML framework you used (the thing parsing .vue files). But I'll make an issue about that.

You could strip out Vue for testing and only use the HTML and CSS to correct it. If you made changes that fixed responsiveness, I can merge them in.

WampyCakes avatar Jan 20 '21 23:01 WampyCakes

Regarding languages, would this be a good implementation? We could add a language field to the submission form that asks what language the video is in. Default to English and use the values listed here. That way, responses for language are static and consistent. My real question is this: is that a good list of languages, or is there a better one? I imagine we want languages listed in their local/native spellings and formatting as much as we can.
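
To make the question concrete, the dropdown could be populated from a static list along these lines. The entries shown are only a tiny hand-picked subset; deciding on the actual list and its source is the open question:

```js
// Tiny illustrative subset -- the full list and where it comes from is exactly
// what is being asked about above. Names use native spellings.
const LANGUAGES = [
  { code: "en", name: "English" },
  { code: "de", name: "Deutsch" },
  { code: "fr", name: "Français" },
  { code: "es", name: "Español" },
  { code: "zh", name: "中文" },
];

const select = document.querySelector("#video-language"); // assumed form field id
for (const { code, name } of LANGUAGES) {
  const option = document.createElement("option");
  option.value = code;
  option.textContent = name;
  option.selected = code === "en"; // default to English
  select.appendChild(option);
}
```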

WampyCakes avatar Jan 21 '21 00:01 WampyCakes

+1 for asking for language

+1 for NOT using a server just now. If this blows up then we can move to a server.

+1 for using tags for different review states. (Maybe consider "unverified"/"verified" instead of "unreviewed")

leotrs avatar Jan 21 '21 12:01 leotrs

Small update:

  • Meta tags have been confirmed to look good.
  • I have submitted a PR to RVerify to strengthen the captcha against spam attacks here. If we end up getting a server, we can implement an unobtrusive captcha through PHP instead to make it stronger (I'm fairly against premade services for this)
  • I have added a dropdown list of languages. I haven't pushed the changes yet, but languages should be about good to go!

WampyCakes avatar Jan 23 '21 22:01 WampyCakes

It seems weird that we rely on the PR to get the date but if we can't easily determine it in an automated way that's ok.

As for the reviewing, I'm in favor of having only a cursory review process, a prominent disclaimer regarding the videos and code on the site, and an easy way to flag videos that don't belong.

I'd recommend not using a server or any other type of external storage if we can avoid it. If we can't avoid it I'd recommend using a serverless solution so that maintenance is minimal. If we can't avoid running a server I'd recommend keeping the code there very minimal and dead simple. We don't have the resources to do real DevOps right now.

I think the idea of having people provide a GitHub username is fine, and doing so in an automated way with OAuth would be even better.

Other than that this looks good to me, but after it is put up for real, it will require someone paying attention for a while to make sure it doesn't break.

eulertour avatar Jan 24 '21 08:01 eulertour

It seems weird that we rely on the PR to get the date but if we can't easily determine it in an automated way that's ok.

It's not too weird. When the user completes the form, it sends the data they submitted as JSON to the Cloudflare Worker, which makes the issue on GitHub. The date is supplied by the showcase website at the moment the form is submitted. If you mean that you think it should just pull the date of the issue — yeah, maybe. But right now it can parse the JSON straight out of the issue body, so it's probably more work than it's worth (also, I standardized the date on the website, and I am unsure of what GitHub shows as the date; I assume they may localize it for each user). Also, I may need to have it change the date on approval. The reason is that the newest entries should be at the top of the page, and if there is a backlog, the most recently submitted video may not appear at the top.
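
For context, the Worker's job is essentially to forward the form data (date included) into a new GitHub issue. A stripped-down sketch — not the real backend; the repository path is a placeholder and the captcha check is glossed over — might look like:

```js
// Stripped-down sketch of the form-handling Worker (service-worker syntax).
// GITHUB_TOKEN is a Worker secret; the repository path below is a placeholder.
addEventListener("fetch", (event) => {
  event.respondWith(handleSubmission(event.request));
});

async function handleSubmission(request) {
  const submission = await request.json(); // includes the date set by the website

  // A real backend would verify the captcha result here before continuing.
  const response = await fetch(
    "https://api.github.com/repos/ManimCommunity/showcase-submissions/issues",
    {
      method: "POST",
      headers: {
        Authorization: `token ${GITHUB_TOKEN}`,
        "User-Agent": "manim-showcase-worker",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        title: `[Submission] ${submission.title} by ${submission.author}`,
        body: JSON.stringify(submission), // raw JSON, parsed back out on approval
      }),
    }
  );

  return new Response(null, { status: response.ok ? 204 : 502 });
}
```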

I think the idea of having people provide a GitHub username is fine, and doing so in an automated way with OAuth would be even better.

It likely won't use OAuth simply because it's a static page. Yet another thing that a server would solve, though I know it's not very practical right now.

Other than that this looks good to me, but after it is put up for real, it will require someone paying attention for a while to make sure it doesn't break.

I will definitely be watching it for a while. I also have set up some error logging on Cloudflare. So between errors logged there and on GitHub workflows, issues should be able to be pinpointed for the most part.

WampyCakes avatar Jan 24 '21 17:01 WampyCakes

Moving Forward

@leotrs @MysaaJava @jsonvillanueva @eulertour @behackl In the interest of moving forward, I have compiled a list of what has been brought up above as necessary changes.

For liability purposes:

  • Adding a blanket statement at the bottom of the page regarding video content and linked code
  • Adding three stages (@MysaaJava's idea): submitted, approved/unreviewed, reviewed. Hovering over the unreviewed/reviewed tag on a video explains the meaning of being reviewed. @leotrs, I think unreviewed/reviewed more closely conveys the meaning than unverified/verified does
  • Report button to Code of Conduct Enforcement Team

Feature improvements:

  • Nearly automated submission edits. Will require providing a GitHub handle on the video submission form (and it will not work for those without a GH account) and will only require approval by a dev. Described here
  • Regarding playlist management, I cannot think of a good way to group them together. Unlike on YouTube where you can manage playlists under your account, we would have to have some linking piece of information among videos. I think playlists would be difficult to implement, but also a pain to maintain as a feature. The reason for this is that we do want videos to remain separate (see above discussion if desired). Additionally, I think that it would lead to a lot more edits (like through GitHub) to approve by the ManimCommunity devs. I think the best thing to do here is just make it so clicking the user (the name after "By" on a video) opens a modal with their other videos they've submitted (and still have a link to the credit--either a YT channel or repo profile). That's the best I can come up with
  • Add Manim version to video submission form

I think that is everything discussed above that needed to be brought up again. I will also be working on a few other things before going live, like implementing a language dropdown and other such stuff (which I don't see as needing much further discussion). Does this seem agreeable to everyone, so we can move forward and get this thing live?

WampyCakes avatar Jan 27 '21 03:01 WampyCakes

I'm fine with going live with it after those changes are made

eulertour avatar Jan 30 '21 03:01 eulertour

I appreciate the work you have put in this; and thank you for summarizing the current status!

We have talked about this a while ago, and essentially I haven't really changed my position. I do think that having a showcase page is an excellent idea -- but I also think that the application and in particular the submission process feels a bit over-engineered right now. In a first iteration, a static page where the submission data is read from the JSON which is modified directly via PRs by users [especially if a GitHub account is needed anyways] would have been just fine.

(Maybe I mainly feel that setting up a custom form-based submission workflow solves a problem that I'm not sure we really would have run into; and in general: the fewer external services we depend on, the fewer things can break down at some point.)

In any case: I'm fine with moving forward with your suggested implementation (including the changes that you mentioned). However, I'd leave further features (especially complicated ones, like the playlist management you mention above) out for now, and first observe to what extent people will actually use the platform.

behackl avatar Jan 30 '21 03:01 behackl

100% agreed with not overengineering things - I'd much rather iterate quickly and solve problems that we have, not problems we think we will have.

leotrs avatar Jan 31 '21 18:01 leotrs

@leotrs I'm not going to implement the editing process yet, but I'll add the field for a GitHub handle for the future. I'll do the other changes soon, and then I think we're good to go.

WampyCakes avatar Jan 31 '21 18:01 WampyCakes

We discussed this during the meeting and decided that the best approach is to proceed with it first and address any potential problems as they come up, so if you have the code available to deploy you're good to go.

eulertour avatar Feb 07 '21 05:02 eulertour

@eulertour I have a few last things I need to finish before it can go live. Very close, but I just have to get around to it.

WampyCakes avatar Feb 07 '21 06:02 WampyCakes