Investigate adding S3 support to haikuporter buildmaster
haikuporter buildmaster currently works out of locally attached storage. There would be a substantial cost benefit if it could function out of S3 buckets.
Example monthly pricing, assuming 1 TiB stored and 1 TiB of egress:
- 1 TiB of directly attached storage at Digital Ocean or Vultr: $100.00 / month
- 1 TiB of S3 storage at Digital Ocean: $25.48 / month
- 1 TiB of S3 storage at Wasabi: $4.99 / month
- 1 TiB of nearline S3 storage at GCP: $108.28 / month (egress is expensive here)
- 1 TiB of standard S3 storage at GCP: $128.28 / month (egress is expensive here)
- 1 TiB of standard S3 storage at Azure: $22.92 / month
This change would also reduce the complexity of the infrastructure and let us cut down on workload placement restrictions: several things "work" out of the haikuporter volumes, and since those volumes can only be mounted on one k8s node at a time (ReadWriteOnce), everything that uses them has to be grouped onto the same physical cluster node. These restrictions limit our ability to do zero-downtime upgrades.
As for the "how": we need an abstraction layer added to haikuporter buildmaster covering the various storage targets (local, S3, etc.). I did something similar (in a greatly simplified way) for the mongodb reporter.
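A minimal sketch of what such an abstraction could look like, in the spirit of the above. All class and method names here are hypothetical, not existing haikuporter APIs; the S3 backend assumes the third-party boto3 library and works against any S3-compatible endpoint (DO Spaces, Wasabi, etc.) via `endpoint_url`:

```python
import os
import shutil
from abc import ABC, abstractmethod


class Storage(ABC):
	"""Abstract storage target for buildmaster state and artifacts."""

	@abstractmethod
	def put(self, localPath, key):
		"""Upload a local file under the given key."""

	@abstractmethod
	def get(self, key, localPath):
		"""Download the object at key to a local file."""

	@abstractmethod
	def exists(self, key):
		"""Return True if an object exists under key."""


class LocalStorage(Storage):
	"""Current behavior: a directory on locally attached storage."""

	def __init__(self, baseDir):
		self.baseDir = baseDir
		os.makedirs(baseDir, exist_ok=True)

	def _path(self, key):
		return os.path.join(self.baseDir, key)

	def put(self, localPath, key):
		target = self._path(key)
		os.makedirs(os.path.dirname(target), exist_ok=True)
		shutil.copyfile(localPath, target)

	def get(self, key, localPath):
		shutil.copyfile(self._path(key), localPath)

	def exists(self, key):
		return os.path.exists(self._path(key))


class S3Storage(Storage):
	"""S3-compatible target via boto3 (imported lazily so the local
	backend carries no extra dependency)."""

	def __init__(self, bucket, endpointUrl=None):
		import boto3  # third-party
		self.client = boto3.client('s3', endpoint_url=endpointUrl)
		self.bucket = bucket

	def put(self, localPath, key):
		self.client.upload_file(localPath, self.bucket, key)

	def get(self, key, localPath):
		self.client.download_file(self.bucket, key, localPath)

	def exists(self, key):
		try:
			self.client.head_object(Bucket=self.bucket, Key=key)
			return True
		except self.client.exceptions.ClientError:
			return False
```

Callers would then be handed a `Storage` instance picked from configuration ("local", "s3", …) rather than touching the filesystem directly, which is what makes swapping the backend possible.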