Apt Repository Only Contains Releases up to 1.32
Good day all,
My apologies if this is documented somewhere I missed, but while I see that Debian packages are attached to the releases for 1.33 and 1.34, they seem to be missing on the apt mirror: https://apt.envoyproxy.io. Is this intentional or is there an update cadence to the apt repo I am not aware of? Thanks.
@nnewton unfortunately this is correct - there is a CI issue with disk space that I haven't found a solution for yet
@phlax thanks for the response! My company is rolling out Envoy to front Redis Cluster, and I may be pushing this onto drupal.org with a similar configuration. I know it doesn't always work like this, but is there anything I can do to help the issue along? (either monetarily with the CI system or by having an engineer look at something)
Hi, yeah - help would be hugely appreciated, though I think I will probably need to help even in that case. The repo is here: https://github.com/envoyproxy/apt
The problem is essentially that it extracts too much data - all the deb files plus the repo snapshots etc. that come with aptly (at least IIUC).
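For context, the aptly flow looks roughly like this (a sketch only - the repo/snapshot/distribution names here are made up, and the real CI config differs):

```sh
# Hypothetical aptly flow - illustrative names, not the actual CI config
aptly repo create envoy
aptly repo add envoy ./debs/*.deb
aptly snapshot create envoy-1.34 from repo envoy
aptly publish snapshot -distribution=bookworm envoy-1.34

# Each published snapshot adds dists/ metadata, and the published pool/
# accumulates every .deb referenced by any snapshot - this is the tree
# that blows past the runner's disk.
```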
I would be happy to work with someone to address this - probably getting some of the failing CI to resolve is a good first step.
I have various ideas about what we could do - probably the most realistic of them is to build the repo in GitHub workflows and, one way or another, allow Netlify to fetch or even just proxy whatever GitHub created.
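The proxy part would be a one-line rewrite on the Netlify side - a minimal sketch, assuming the built repo ended up on GitHub Pages (that URL is hypothetical):

```
# _redirects - a 200 status makes this a proxy rewrite rather than a redirect
/*  https://envoyproxy.github.io/apt/:splat  200
```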
Hello @phlax,
I work for @nnewton and have some time I can spend working towards a resolution here.
As I understand the issue, the workflow in update.yml pulls down the build cache + debs, runs a build, and, if the build is against a commit on the main branch, pushes the build result up to Google Cloud Storage, where it sits until some process running in Netlify decides to pull it down.
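If I have that right, the flow boils down to something like this (a shell sketch of the steps as I understand them; the build script name is hypothetical):

```sh
# Pull the existing build cache + debs from GCS into the runner
gsutil -m rsync -r gs://envoy-apt-cache ./cache

# Run the repo build against the fetched state
./build.sh

# Only main-branch builds push the result back for Netlify to pick up
if [ "${GITHUB_REF}" = "refs/heads/main" ]; then
  gsutil -m rsync -r ./cache gs://envoy-apt-cache
fi
```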
I created a local mirror of your repo (sorry), and noticed that it's 23GB. That seems to be the brunt of the issue, with GH runners only having 25-30GB of usable disk space. It seems to me that the simplest solution is to just stop pulling in debs with gsutil rsync, and only pull in what is strictly necessary for pull requests to ensure that builds are passing.
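For example, something like this could skip the deb payloads on PR builds (the exclude pattern is illustrative; gsutil's `-x` takes a Python regex of paths to skip):

```sh
# Pull everything except the .deb payloads - PR builds only need the metadata
gsutil -m rsync -r -x '.*\.deb$' gs://envoy-apt-cache ./cache
```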
When running the update workflow directly on the main branch, we can use GCP's 'Cloud Storage FUSE' to mount the envoy-apt-cache bucket into the runner's working directory so that all local changes end up in GCS without consuming as much disk space.
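A minimal sketch of what that step might look like (the mount point is hypothetical; `--implicit-dirs` makes gcsfuse infer directories from object prefixes):

```sh
# Mount the bucket into the workspace; writes go straight to GCS
mkdir -p "${GITHUB_WORKSPACE}/cache"
gcsfuse --implicit-dirs envoy-apt-cache "${GITHUB_WORKSPACE}/cache"

# ... run the build against ./cache ...

# Unmount when done
fusermount -u "${GITHUB_WORKSPACE}/cache"
```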
Should I open a separate issue in the apt repository to continue the discussion?
I think this is more or less correct - although the current issue with publishing is Netlify, I think GitHub runners can probably work.
Using GCP - especially with FUSE - sounds like a good option; we can probably have Netlify proxy for the final product as well.
The drawback is that Netlify allows PR previews out of the box, whereas it's much harder to publish PRs safely from a GitHub runner.
I'll try to find a bit of time this week to bring myself back up to speed (I think things may be a bit in flux, as I previously started trying to remedy this).
@phlax I see the new green builds after adding the disk-space action. That's good to see.
I'm a bit confused about what the present issue with Netlify is, as the old failed builds don't actually show the build logs. Is it just a disk space issue like the one we were seeing on the update workflow? I know next to nothing about Netlify.
Is there anything I can do now to help ease your workload on this issue? Would it help if I were to create a draft PR to install and configure GCS FUSE? That disk-space action is helpful for now, but it's only a matter of time before we run out of usable disk space again.
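For what it's worth, those disk-space actions mostly just delete preinstalled toolchains the job never uses - roughly something like this (the paths are the usual suspects on ubuntu-latest runners; the exact savings vary by image):

```sh
# Reclaim space on a GitHub-hosted runner by dropping unused preinstalled SDKs
sudo rm -rf /usr/share/dotnet /usr/local/lib/android /opt/ghc
sudo docker image prune --all --force
df -h /
```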
> Would it help if I were to create a draft PR to install and configure GCS FUSE?
So, as mentioned above, I think getting this working in a more general way is a good first step - it doesn't solve the apt issue by itself, but it does (hopefully) provide the tools to do it.
I have a WIP PR here to add the tool: https://github.com/envoyproxy/toolshed/pull/2654 - it's mostly working, so I'll land that and test it with Envoy more generally. Once that is working we can look again at the apt repo specifically.
So @kindlehl, I'm not sure how much gcsfuse will help, based on initial testing.
The main issue I'm hitting is that it requires a directory-like structure, which then takes an unfeasibly long time to populate.
For reference, our current CI (in Envoy), which uploads/downloads to/from a tarball, takes 2-3 minutes max either way, whereas uploading the same content using gsutil rsync takes 2-3 hours. I can try gcsfuse for the upload, but I highly suspect it will be even slower.
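For comparison, the fast tarball path amounts to streaming a single object instead of walking thousands of files - a sketch (the actual Envoy CI tooling differs, and the object name is hypothetical):

```sh
# Upload: one streamed object instead of per-file rsync round-trips
tar czf - -C ./cache . | gsutil cp - gs://envoy-apt-cache/cache.tgz

# Download: stream it back and unpack
mkdir -p ./cache
gsutil cp gs://envoy-apt-cache/cache.tgz - | tar xzf - -C ./cache
```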
I had a vague hope that I would be able to do a fast server-side copy - so that, once populated, we could just sync any changes - but that also takes hours rather than minutes, so I think options are limited there.
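For reference, the server-side copy was along these lines (the destination bucket is hypothetical; the bytes never leave GCS, but each object is still a separate operation, which is where the hours go):

```sh
# Server-side bucket-to-bucket copy - no local download, but still per-object
gsutil -m cp -r gs://envoy-apt-cache/* gs://envoy-apt-staging/
```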
It might work a bit better for the apt repo case - the objects are (more) immutable, and using GCS versioning may also be an option - but TBH I would be very surprised if this performs better than the tarball approach we currently use.
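If we went the versioning route, enabling it is a one-liner at the bucket level (lifecycle rules for pruning old generations would be a separate concern):

```sh
# Keep prior generations of overwritten/deleted objects
gsutil versioning set on gs://envoy-apt-cache
gsutil versioning get gs://envoy-apt-cache
```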
This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or "no stalebot" or other activity occurs. Thank you for your contributions.