Keeping forks on the same server is a security vulnerability and legal risk for the server host, and an operational risk for the maintainer of the fork.
Scott,
You've got some interesting ideas here, and it's nice to see someone tackling the UX issues that git has. That said, speaking as someone who's been the technical co-founder of a number of companies (including Cloudability, which directly informs my thoughts here), here are my first impressions upon reading through the introductory documentation:
- If I publish an open source project, and someone who wants to fork it does so on my server, they have an easy way to forcibly and arbitrarily increase my OpEx spend. They can just make a fork and start adding multi-gigabyte files by the boatload to inflate my S3 bill. Of course, if I can ban someone from doing that on my server, then as a fork maintainer I face the risk that my fork could be shut down at a moment's notice by someone outside of my organization.
- Similarly, a bad actor could use a fork to host illegal content. As the host of that content, it's entirely reasonable to expect that laws in many jurisdictions may make me liable for it. So anyone hosting a server for the sake of hosting their OSS project may wind up with a de facto obligation to closely monitor everything being done by anyone forking their project.
- A common use-case for me in forking repositories is simply to ensure that they continue to exist if the original developer decides to delete them. Requiring that the fork be on the same server is only a viable strategy if all the projects I rely on are using the Grace equivalent of GitHub. The moment one of them has their own server is the moment I am either forced to come up with a way to continuously pull updates from one Grace server to another (see the sketch after this list), or carry the risk that a dependency may simply disappear from under me. That adds to my operational risks.
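For what it's worth, the way I handle that archival use-case today is a bare mirror clone of the upstream repository, refreshed on a schedule; presumably something equivalent would need to exist between Grace servers. A minimal sketch with git (the URL and paths here are just placeholders):

```sh
# One-time: create a bare mirror of the upstream repository.
git clone --mirror https://example.com/upstream/project.git /srv/mirrors/project.git

# On a schedule (e.g. from cron): fetch all refs, pruning anything upstream deleted.
cd /srv/mirrors/project.git && git remote update --prune
```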
I have other concerns, as well.
One you might have a strategy for addressing, and it might be helpful if you address it in the FAQ: I routinely make use of `git-lfs`, and have a few repos that are simply too large to work with over an Internet connection -- I host a server for them in my home, where I have robust, high-speed, local network access. Think a 5.9GB `.git` folder with a 2GB working directory. I also used to do game development, where large files being updated frequently is a common occurrence. For example, in one project I have a Photoshop file that's 247MB. Having that much data get tossed around every time the artist hits save, if we happen to be working on related branches, could be pretty disruptive. And even if I have the bandwidth to handle it, the time those transfers take means the "instantly pick up [other] updates and automatically rebase" behavior would be gated on them. For scale: in that repository, there are dozens of `.psd`, `.fbx`, and `.max` files that are 10+MB. I suspect this is a case where my requirements are simply out of scope for what you're trying to achieve. Worth talking about in your FAQs either way, perhaps.
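For context, what I mean by "make use of `git-lfs`" is the standard tracking setup, roughly like this (the file patterns are just examples from my repos):

```sh
# Enable the git-lfs hooks for this repository (one-time per clone).
git lfs install

# Route the large binary formats through LFS rather than regular git objects.
git lfs track "*.psd" "*.fbx" "*.max"

# git lfs track records those patterns in .gitattributes, e.g.:
#   *.psd filter=lfs diff=lfs merge=lfs -text
git add .gitattributes
```

Each tracked file is then stored in git as a small pointer, with the actual blob transferred separately, and that blob traffic is exactly the pattern I'm worried about here.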
The largest of my other concerns is one you've explicitly decided is out of scope (poor/intermittent network access), so I won't delve into it other than to point out that it's not as simple as "sometimes I'm on an airplane". In fact, I rarely fly. I am, however, routinely in areas where I'm on cellular Internet and access is marginal at best. And even if I can maintain a sufficiently stable, fast connection, cost could easily be a concern for me. Not that that should necessarily impact your thinking on this particular architectural decision.

What might be worth considering, however, is the impact on my organization's productivity if the VCS server becomes unavailable. You can make it as fault tolerant as you want, but an operational error, a backhoe through a fiber line, etc. would mean that while my entire team could continue development, they'd lose the ability to at least incrementally snapshot their work and go back to previous iterations if needed. Local caching of saves/checkpoints might adequately address that concern, though, so perhaps I'm overly worried there.
Relatedly, I'll note that another common use-case for me is repositories that are only ever local. I don't know if that's a common occurrence or if I'm just being idiosyncratic, so it may very well not be worth addressing. I suspect, though, that if your thinking is driven by your experience at GitHub, you might not really have a view on how common such a use-case is, as it wouldn't come up in discussions with development teams about their professional usage. It could be, hypothetically, that most developers do this and it simply never came up in conversation. I doubt it's most, but I want to note that it's very hard to have confidence in how common such a use-case is.
All of that said, I appreciate that someone is pushing forward with bold ideas to try and address the substantial UX issues git has and wish you the best of luck. However things turn out for Grace, I look forward to seeing learnings from it driving the state of the art forward in the future.