[bitnami/minio] 2024.1.18-debian-11-r1 cannot persist data when the pod restarts, while 2023.5.18-debian-11-r2 works fine
Name and Version
bitnami/minio 13.2.1
What architecture are you using?
amd64
What steps will reproduce the bug?
1. With the image tags below in minio-values.yaml:

   ```yaml
   image:
     registry: docker.io
     repository: bitnami/minio
     tag: 2024.1.18-debian-11-r1
   clientImage:
     registry: docker.io
     repository: bitnami/minio-client
     tag: 2024.1.18-debian-11-r1
   # ignored lines
   defaultBuckets: "test"
   provisioning:
     enabled: true
     policies:
       - name: test
         statements:
           - effect: "Allow"
             actions: ["s3:*"]
             resources: ["arn:aws:s3:::test", "arn:aws:s3:::test/*"]
     users:
       - username: 'test'
         password: 'testtest'
         policies:
           - test
   ```
2. Install MinIO:

   ```console
   helm upgrade --install minio minio-13.2.1.tgz -f minio-values.yaml -n minio
   ```
3. Run:

   ```console
   mc alias set myminio https://<minio-ingress-host> test testtest
   ```

4. Expect to see:

   ```
   Added `myminio` successfully.
   ```
5. Restart the MinIO pod manually with:

   ```console
   kubectl delete pod -l app.kubernetes.io/instance=minio -n minio
   ```

6. Wait for the MinIO pod to be running, then run:

   ```console
   mc alias set myminio https://<minio-ingress-host> test testtest
   ```

   This now fails with:

   ```
   mc: <ERROR> Unable to initialize new alias from the provided credentials. The Access Key Id you provided does not exist in our records.
   ```
7. Now change the image tags to:

   ```yaml
   image:
     registry: docker.io
     repository: bitnami/minio
     tag: 2023.5.18-debian-11-r2
   clientImage:
     registry: docker.io
     repository: bitnami/minio-client
     tag: 2023.5.18-debian-11-r2
   ```
8. Run the same steps 2-6 again; this time there is no error (a sketch of the tag swap via `--set` follows this list).
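For reference, the version swap in step 7 can also be done without editing minio-values.yaml; a minimal sketch, assuming the same release name and namespace as above:

```console
# Override only the image tags on top of the existing values file:
helm upgrade --install minio minio-13.2.1.tgz -f minio-values.yaml -n minio \
  --set image.tag=2023.5.18-debian-11-r2 \
  --set clientImage.tag=2023.5.18-debian-11-r2
```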
Are you using any custom parameters or values?
```yaml
image:
  registry: docker.io
  repository: bitnami/minio
  tag: 2024.1.18-debian-11-r1
clientImage:
  registry: docker.io
  repository: bitnami/minio-client
  tag: 2024.1.18-debian-11-r1
mode: standalone
auth:
  rootUser: admin
  rootPassword: '{{ minio_admin_password }}'
defaultBuckets: "test"
provisioning:
  enabled: true
  policies:
    - name: test
      statements:
        - effect: "Allow"
          actions: ["s3:*"]
          resources: ["arn:aws:s3:::test", "arn:aws:s3:::test/*"]
  users:
    - username: 'test'
      password: 'testtest'
      policies:
        - test
containerPorts:
  api: 9000
  console: 9001
apiIngress:
  enabled: true
  hostname: ' {{ minio_ingress_host }}'
  path: "/"
  servicePort: minio-api
  ingressClassName: 'nginx'
  tls: true
  extraTls:
    - secretName: tls-secret
      hosts:
        - ' {{ minio_ingress_host }}'
ingress:
  enabled: true
  hostname: ' {{ minio_ingress_host }}'
  path: "/"
  servicePort: minio-console
  ingressClassName: 'nginx'
  tls: true
persistence:
  enabled: true
  mountPath: /data
  accessModes:
    - ReadWriteOnce
  size: '100Gi'
  annotations: { }
  existingClaim: ""
```
What is the expected behavior?
MinIO should persist data (including provisioned users) after a pod restart.
What do you see instead?
The MinIO server loses the provisioned user credentials after the pod restarts.
Hi @robinliubin
Please correct me if I'm wrong, but AFAIK aliases are stored in the local mc configuration, see:
- https://min.io/docs/minio/linux/reference/minio-mc/mc-alias-set.html

In other words, it's something saved on the "client side" rather than the "server side". We don't persist the client-side config on MinIO containers, so it's normal to lose these aliases if the container/pod gets recreated.
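For illustration, the alias is just an entry in mc's JSON config file (the same `/.mc/config.json` path that shows up in the mc output further down this thread); assuming a default home directory:

```console
# The alias lives client-side in mc's config file, not on the server:
cat ~/.mc/config.json
```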
@juan131 Thanks for helping. The issue we observed is not the "client side" losing the alias, but the "server side" losing the provisioned user credentials. As you can see in the values.yaml, provisioning is enabled on the server side.

With image tag 2024.1.18-debian-11-r1, the provisioned data is lost when the pod is restarted, while after changing only the image tag to 2023.5.18-debian-11-r2, the provisioned data survives the restart.
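One way to confirm the loss is on the server side; a sketch, assuming the root credentials from auth.rootUser/auth.rootPassword remain valid after the restart:

```console
# List the server-side users with the root credentials; on 2024.1.18-debian-11-r1
# the provisioned "test" user disappears from this list after the pod restart:
mc alias set rootminio https://<minio-ingress-host> admin <minio_admin_password>
mc admin user list rootminio
```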
Hi @robinliubin
I was unable to reproduce the issue using the values.yaml below:
```yaml
defaultBuckets: "test"
provisioning:
  enabled: true
  policies:
    - name: test
      statements:
        - effect: "Allow"
          actions: ["s3:*"]
          resources: ["arn:aws:s3:::test", "arn:aws:s3:::test/*"]
  users:
    - username: 'test'
      password: 'testtest'
      policies:
        - test
```
These are the steps I followed:
1. Install the chart & ensure provisioning worked as expected:

   ```console
   $ helm install minio oci://registry-1.docker.io/bitnamicharts/minio -f minio.yaml
   NAME: minio
   (...)
   CHART NAME: minio
   CHART VERSION: 13.3.4
   APP VERSION: 2024.2.4
   (...)
   $ kubectl logs -l app.kubernetes.io/component=minio-provisioning -c minio
   │ 127.0.0.1:9000 │ ✔ │
   └────────────────┴────────┘
   Restarted `provisioning` successfully in 503 milliseconds
   Created policy `test` successfully.
   Added user `test` successfully.
   Attached Policies: [test]
   To User: test
   Enabled user `test` successfully.
   End Minio provisioning
   ```
2. Create a "minio-client" pod to run the `mc alias` command:

   ```console
   $ kubectl run --namespace default minio-client --rm --tty -i --restart='Never' --image docker.io/bitnami/minio-client:2024.1.31-debian-11-r1 --command -- mc alias set myminio http://minio:9000 test testtest
   mc: Configuration written to `/.mc/config.json`. Please update your access credentials.
   mc: Successfully created `/.mc/share`.
   mc: Initialized share uploads `/.mc/share/uploads.json` file.
   mc: Initialized share downloads `/.mc/share/downloads.json` file.
   Added `myminio` successfully.
   pod "minio-client" deleted
   ```
3. Delete the MinIO pod manually and create a "minio-client" pod again to run the `mc alias` command:

   ```console
   $ kubectl delete pod -l app.kubernetes.io/instance=minio
   pod "minio-7fc546fdff-qqj2m" deleted
   $ kubectl run --namespace default minio-client --rm --tty -i --restart='Never' --image docker.io/bitnami/minio-client:2024.1.31-debian-11-r1 --command -- mc alias set myminio http://minio:9000 test testtest
   mc: Configuration written to `/.mc/config.json`. Please update your access credentials.
   mc: Successfully created `/.mc/share`.
   mc: Initialized share uploads `/.mc/share/uploads.json` file.
   mc: Initialized share downloads `/.mc/share/downloads.json` file.
   Added `myminio` successfully.
   pod "minio-client" deleted
   ```
```yaml
persistence:
  enabled: true
  mountPath: /data
  accessModes:
    - ReadWriteOnce
  size: '100Gi'
  annotations:
    "helm.sh/resource-policy": keep
  existingClaim: ""
```

In my test, the persistence section makes the difference: if the section above is added, the issue is reproducible.
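To see what the chart actually mounts at the custom path, a quick check (a sketch; `deploy/minio` assumes the release is named minio):

```console
# Inspect the container's volume mounts, then the contents of the custom data path:
kubectl get deploy minio -o jsonpath='{.spec.template.spec.containers[0].volumeMounts}'
kubectl exec deploy/minio -- ls -la /data
```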
Hi @robinliubin
I'm also enabling persistence in my tests (it's enabled by default). Why did you change the default mount path (see https://github.com/bitnami/charts/blob/main/bitnami/minio/values.yaml#L1012)? Please note it was replaced at https://github.com/bitnami/charts/commit/e707712fbd687ac271fdcecdf415f4f2a6aeb76e
Tested with the default mountPath: /bitnami/minio/data, and now the data is persisted.

Though I still don't understand why mountPath would lead to this issue.
Hi @robinliubin
The new container image expects the data to be mounted at a different path; see the value of MINIO_DATA_DIR:
- https://github.com/bitnami/containers/tree/main/bitnami/minio#customizable-environment-variables
Therefore, the mount path must be aligned with that.
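So the chart-side fix is to keep persistence.mountPath aligned with the image's expected data dir; a minimal sketch of the relevant values, based on the default path mentioned above:

```yaml
persistence:
  enabled: true
  # Must match the data path the new image expects (MINIO_DATA_DIR):
  mountPath: /bitnami/minio/data
  size: '100Gi'
```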
@juan131, if it has to be static, then Helm should not expose it, to avoid users wrongly modifying the value.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.