
Switch package releases from standalone packaging server to s3 release bucket.

Open niedbalski opened this issue 3 years ago • 6 comments

Bug Report

Releases should go into the S3 releases bucket, replacing the use of the standalone packaging server.

  • [x] Ensure package releases are correctly signed and published to the S3 releases bucket.
  • [x] Synchronise previous releases into the S3 release bucket via rsync (see the sketch after this list).
  • [x] Ensure the old and new signing GPG keys are updated and documented.
  • [ ] Update fluent-bit-infra to point the packages.fluentbit.io domain at the S3 bucket.
  • [ ] Enable stats for the release bucket (either via Cloudflare or on S3 directly).
  • [ ] Send download stats to the Grafana dashboards.
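
A rough sketch of the synchronisation step, assuming the old packaging server is still reachable over SSH and the bucket mirrors its directory layout (host, paths, and bucket name below are placeholders, not the actual infra values):

    #!/usr/bin/env bash
    # Sketch: mirror existing releases from the old packaging server into the
    # S3 release bucket. rsync cannot write to S3 directly, so stage locally
    # first and push with `aws s3 sync`.
    set -euo pipefail

    STAGING_DIR=/tmp/fluent-bit-releases
    BUCKET=s3://packages.fluentbit.io      # assumed bucket name

    # 1. Pull the current release tree from the standalone packaging server.
    rsync -avz --delete packaging-server:/var/www/releases/ "${STAGING_DIR}/"

    # 2. Push it into the S3 release bucket, preserving the layout.
    aws s3 sync "${STAGING_DIR}/" "${BUCKET}/" --acl public-read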

To Reproduce

N/A

Additional context

niedbalski avatar Mar 17 '22 12:03 niedbalski

We need to ensure repo metadata signing is covered in packaging/update_repos.sh, as well as the use of SHA-256 digests; this seems to be required for FIPS mode on RHEL.

https://github.com/fluent/fluent-bit/issues/3617 https://github.com/fluent/fluent-bit/issues/3618

Both should be testable in the smoke tests.
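
For reference, a minimal sketch of the kind of metadata signing update_repos.sh would need, assuming createrepo_c on the yum side and apt-ftparchive plus gpg on the apt side (repo paths and the key ID are placeholders; the real script may differ):

    # Yum/RHEL side: regenerate metadata with SHA-256 digests, then detach-sign it.
    createrepo_c --update --checksum sha256 /repo/centos/8
    gpg --batch --yes --detach-sign --armor \
        --local-user "$SIGNING_KEY_ID" /repo/centos/8/repodata/repomd.xml

    # Apt side: rebuild the Release file (includes SHA-256 sums) and sign it.
    apt-ftparchive release /repo/ubuntu/focal > /repo/ubuntu/focal/Release
    gpg --batch --yes --clearsign --local-user "$SIGNING_KEY_ID" \
        -o /repo/ubuntu/focal/InRelease /repo/ubuntu/focal/Release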

patrick-stephens avatar Mar 18 '22 08:03 patrick-stephens

  • [ ] Add a smoke test run that performs simple installation checks against the new domain URL (a sketch follows below).
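
A minimal sketch of such a check, assuming a Debian/Ubuntu test container and the documented key/repo layout under packages.fluentbit.io (package name, install path, and codename may vary per release; swap in the test domain while validating):

    #!/usr/bin/env bash
    # Sketch: basic install check against the packages domain.
    set -euo pipefail

    DOMAIN=https://packages.fluentbit.io   # or the packages-test domain while testing
    CODENAME=$(lsb_release -cs)

    curl -fsSL "${DOMAIN}/fluentbit.key" \
      | gpg --dearmor -o /usr/share/keyrings/fluentbit-keyring.gpg
    echo "deb [signed-by=/usr/share/keyrings/fluentbit-keyring.gpg] ${DOMAIN}/ubuntu/${CODENAME} ${CODENAME} main" \
      > /etc/apt/sources.list.d/fluent-bit.list

    apt-get update
    apt-get install -y fluent-bit
    /opt/fluent-bit/bin/fluent-bit --version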

patrick-stephens avatar Apr 06 '22 11:04 patrick-stephens

New CNAME is up: https://github.com/fluent/fluent-bit-infra/blob/91feb5d61448fdf74484b593089731df3f3f2e6e/terraform/domains.tf#L179. It is accessible at packages-test.fluentbit.io.s3.amazonaws.com.
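
A quick way to check the new record from the command line (hostnames mirror the ones above; TLS may not validate yet on the raw S3 endpoint, hence -k):

    # Does the test hostname resolve to the S3 endpoint?
    dig +short CNAME packages-test.fluentbit.io

    # Is the bucket serving content over the test hostname?
    curl -skI https://packages-test.fluentbit.io/ | head -n 5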

patrick-stephens avatar May 03 '22 08:05 patrick-stephens

@patrick-stephens can we kick this off and RIP the server?

niedbalski avatar Jun 09 '22 11:06 niedbalski

Not without the Windows builds and checksums, plus there are cert issues to sort out.

patrick-stephens avatar Jun 09 '22 11:06 patrick-stephens

Checksums should be done now; we're just waiting on #5599 to fully deprecate the AppVeyor builds, although we can transition without it.
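
For the Windows artifacts the checksum step is essentially the following (filenames are illustrative; the actual release pipeline may name and group them differently):

    # Generate SHA-256 checksums alongside the staged Windows artifacts...
    cd /staging/windows
    sha256sum fluent-bit-*.exe fluent-bit-*.zip > fluent-bit-windows.sha256

    # ...and verify them after download from the bucket, e.g. in the smoke tests.
    sha256sum -c fluent-bit-windows.sha256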

patrick-stephens avatar Jun 21 '22 08:06 patrick-stephens

Note: getting per-object stats is tricky for S3, so we are using the S3 buckets as the source of truth but hosting on a server that replicates them, with the relevant metrics then available.
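
A rough sketch of that replication step, assuming the serving host mirrors the bucket into its web root on a cron/timer and the download metrics come from the web server's access logs (bucket name and paths are placeholders):

    #!/usr/bin/env bash
    # Sketch: keep the serving host's web root in sync with the S3 source of truth.
    set -euo pipefail

    BUCKET=s3://packages.fluentbit.io      # assumed bucket name
    WEB_ROOT=/var/www/packages.fluentbit.io

    # One-way sync: the bucket stays authoritative, the server is a mirror.
    aws s3 sync "${BUCKET}/" "${WEB_ROOT}/" --delete

    # Per-object download stats can then be derived from the web server's
    # access logs (e.g. scraped into Prometheus and shown on the Grafana dashboards).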

patrick-stephens avatar Jan 18 '23 13:01 patrick-stephens