Evaluate enabling Git LFS on this repo
Starting a discussion here, as I think there's a warning in the weekly generation logs that we've probably missed over time as the OpenAPI descriptions grew:
```
remote: warning: See http://git.io/iEPt8g for more information.
remote: warning: File openapi/beta/openapi.yaml is 50.57 MB; this is larger than GitHub's recommended maximum file size of 50.00 MB
remote: warning: GH001: Large files detected. You may want to try Git Large File Storage - https://git-lfs.github.com.
```
(when pushing OpenAPI descriptions)
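For reference, here's one way to list the blobs in history that are approaching GitHub's 50 MB warning threshold (a sketch; the 40 MB cutoff is just an example, and sizes are decimal MB like in the warning above):

```sh
# List every blob ever committed, with its size, largest first.
# Run from a full (non-shallow) clone; paths with spaces get truncated.
git rev-list --objects --all \
  | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
  | awk '$1 == "blob" && $3 > 40000000 { printf "%.2f MB\t%s\n", $3 / 1000000, $4 }' \
  | sort -rn
```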
Currently the metadata clone of the repo takes about 8 seconds in the weekly generation process, which is not a big deal by any measure. However, I suspect clone/fetch operations are much slower on slower connections; I'm personally on a dedicated fiber connection all to myself, but I recognize this is not the case for everyone.
Some early LFS users report a 10X improvement in clone speed (note: there's no need to use a separate command anymore, it's just the regular git clone command).
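To illustrate that point (assuming git-lfs is installed on the machine; the clone URL is a placeholder):

```sh
# One-time setup per machine: registers the LFS filters in ~/.gitconfig.
git lfs install

# After that, a plain clone pulls LFS content transparently; the old
# separate `git lfs clone` command is no longer needed.
git clone https://github.com/<org>/<repo>.git

# Jobs that don't need the big files can clone just the pointer files:
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/<org>/<repo>.git
```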
I don't think we should optimize something that isn't a problem, but I wanted to double-check: is this actually a problem for people who often work on this repo?
(Interestingly, native GitHub features like the contributors and commits views under Insights never return, and I suspect it's related.)
CC @zengin @peombwa @MIchaelMainer @irvinesunday
Thanks for raising this. I haven't observed any significant slowdown. That said, since it won't require a separate command, switching shouldn't be a huge inconvenience. My vote would be to switch, as the file size will only grow from this point on.
Update: we've run into storage space issues on the agents lately. This repo is 1 GB once cloned, and any given agent can hold multiple copies of it because agents get recycled. The docs repo is 1 GB as well, but it doesn't seem to have large files, just lots and lots of commits.
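As a stopgap on the agents, and orthogonal to LFS, shallow or blobless clones can cut the on-disk and transfer size (a sketch, assuming the builds don't need full history; URL is a placeholder):

```sh
# Only the tip commit, no history: helps when the bulk is old blobs/commits.
git clone --depth=1 https://github.com/<org>/<repo>.git

# Keep all commits but fetch file contents lazily on checkout:
git clone --filter=blob:none https://github.com/<org>/<repo>.git
```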
If we enable this on the current repo:
- people would need to run `git lfs install` after they clone
- ADO pipelines would need to be updated (rough commands after this list)
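Roughly the underlying commands for both items (hedged; in ADO YAML, the built-in checkout step also has an `lfs: true` option, which may be all a pipeline needs):

```sh
# Each contributor, once per machine: wires the LFS smudge/clean filters.
git lfs install

# A pipeline whose checkout step doesn't handle LFS can fetch the
# content explicitly after checkout:
git lfs pull
```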
Additionally, we might want to rewrite history with a tool like `git lfs migrate` (if we don't care about preserving the SHA-1s) to achieve maximum gains.
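A sketch of what that rewrite could look like (the include pattern is an assumption based on the file in the warning above; every commit touching those files gets a new SHA-1, hence the force-push):

```sh
# Convert the matching files to LFS pointers across all refs.
git lfs migrate import --include="openapi/**/*.yaml" --everything

# History has been rewritten, so branches and tags must be force-pushed.
git push --force --all origin
git push --force --tags origin
```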
Re-opening this one since it wasn't implemented yet; GitHub just picked up closing phraseology that wasn't actually meant to close the issue.