postgres-operator
[UI] Support Azure and GCP backups with UI
Please answer some short questions which should help us understand your problem / question better:
- Which image of the operator are you using? e.g. registry.opensource.zalan.do/acid/postgres-operator:v1.6.3
- Where do you run it - cloud or metal? Kubernetes or OpenShift? Azure
- Are you running Postgres Operator in production? yes
- Type of issue? feature request
At the moment, postgres-operator-ui appears to be hard-coded to work only with AWS S3 buckets. It will not work with either GCP or Azure, both of which are supported by Spilo and WAL-G. Even though GCP is supported by the postgres-operator itself, it still doesn't work in the UI.
It would be nice to have feature parity in the UI for both GCP and Azure.
As we are not using GCP or Azure we are relying on the community to provide the support for both these platforms. Any help is appreciated.
Could https://github.com/zalando/postgres-operator/issues/937#issuecomment-752280357 be the necessary fix?
Any updates on this one? I've got the same issue with Scaleway.
I've been trying to get this working myself with Azure Blob Storage; however, I've run into the same issue as described in https://github.com/zalando/postgres-operator/issues/937#issuecomment-752280357 (the "no snapshots found" issue). I was able to get it to that point by:
- implementing / configuring S3Proxy ( https://github.com/gaul/s3proxy ) in our cluster and pointing the operator-ui towards it. S3Proxy config here:

```yaml
- name: LOG_LEVEL
  value: debug
- name: S3PROXY_ENDPOINT
  value: http://0.0.0.0:80
- name: S3PROXY_AUTHORIZATION
  value: none
- name: S3PROXY_IDENTITY
  value: local-identity
- name: S3PROXY_CREDENTIAL
  value: local-credential
- name: JCLOUDS_AZUREBLOB_AUTH
  value: azureKey
- name: JCLOUDS_PROVIDER
  value: azureblob
- name: JCLOUDS_IDENTITY
  valueFrom:
    secretKeyRef:
      name: your-secrets
      key: AZURE_STORAGE_ACCOUNT
- name: JCLOUDS_CREDENTIAL
  valueFrom:
    secretKeyRef:
      name: your-secrets
      key: AZURE_STORAGE_ACCESS_KEY
- name: JCLOUDS_ENDPOINT
  value: https://your-storage-acct-id.blob.core.windows.net
```
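For context, env entries like the ones above would sit in the s3proxy container of a small Deployment/Service pair. A minimal sketch (the image, port, and resource names are my assumptions, not from the original setup):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: s3proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: s3proxy
  template:
    metadata:
      labels:
        app: s3proxy
    spec:
      containers:
        - name: s3proxy
          image: andrewgaul/s3proxy:latest  # assumed image; pin a tag in practice
          ports:
            - containerPort: 80
          env: []  # <-- the S3PROXY_* / JCLOUDS_* entries shown above go here
---
apiVersion: v1
kind: Service
metadata:
  name: s3proxy-service  # must match the service name the operator-ui points at
spec:
  selector:
    app: s3proxy
  ports:
    - port: 80
      targetPort: 80
```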
- configuring our operator to match the wal/basebackup path that the UI is looking for:

```yaml
WALG_AZ_PREFIX: "azure://your-container-name/$(SCOPE)/wal/$(PGVERSION)"
```
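To make the path convention concrete, here is a small sketch of how that template expands; the cluster name and PostgreSQL version are illustrative, not from the original comment:

```python
# Sketch: $(SCOPE) and $(PGVERSION) are substituted in the pod spec;
# SCOPE is the cluster name, PGVERSION the PostgreSQL major version.
prefix = "azure://your-container-name/$(SCOPE)/wal/$(PGVERSION)"

def expand(template, scope, pgversion):
    # emulate the $(VAR)-style substitution done for the container env
    return (template
            .replace("$(SCOPE)", scope)
            .replace("$(PGVERSION)", pgversion))

print(expand(prefix, "acid-minimal-cluster", "14"))
# azure://your-container-name/acid-minimal-cluster/wal/14
```

The UI then looks for WAL files and basebackups under that resolved prefix, which is why the operator and the UI must agree on the layout.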
- configuring the actual operator-ui as follows (these are the only ones I added to the defaults):

```yaml
- name: WALE_S3_ENDPOINT
  value: http+path://s3proxy-service # <-- name of the service for s3proxy
- name: AWS_ENDPOINT
  value: http://s3proxy-service
- name: SPILO_S3_BACKUP_PREFIX
  value: ""
- name: AWS_ACCESS_KEY_ID
  value: none
- name: AWS_SECRET_ACCESS_KEY
  value: none
- name: SPILO_S3_BACKUP_BUCKET
  value: your-container-name
- name: USE_AWS_INSTANCE_PROFILE
  value: "false"
```
I don't see any errors on the operator-ui side, and it will load the clusters and the "base" snapshot section, but it's always empty. I can see all of the requests coming through to the s3proxy as well, and as far as I can tell it's returning the data. I really hope they at least get this fixed for the S3 version, as I think this setup should work once that issue is resolved. If I get time I'll try the workaround mentioned earlier and see if it works, but I can't spend much more time on it at the moment, so it may be a while.
Thanks all!
I've tried to run it with Cloudflare R2; after some fixing, the same thing happens as with Azure Blob Storage: no snapshots found. Unfortunately, I've got no more time for debugging. The required fixes were:
- set the `USE_AWS_INSTANCE_PROFILE` env to false
- apply this bugfix: https://github.com/boto/boto/pull/3911
- force the boto lib into using SigV4, as required by the Cloudflare API, by adding this piece of code at the top of `sigv4_check_apply` in `wal_e/blobstore/s3/s3_util.py`:
```python
# boto is already imported in wal_e/blobstore/s3/s3_util.py
if not boto.config.has_option('s3', 'use-sigv4'):
    if not boto.config.has_section('s3'):
        boto.config.add_section('s3')
    boto.config.set('s3', 'use-sigv4', 'True')
```
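As an alternative to patching the code, the same setting can be supplied through a boto config file (e.g. `/etc/boto.cfg` or `~/.boto`), which boto reads at import time; this should be equivalent to the snippet above:

```ini
[s3]
use-sigv4 = True
```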