
[bitnami/minio] 2024.1.18-debian-11-r1 cannot persist data when pod restarts, while 2023.5.18-debian-11-r2 works fine


Name and Version

bitnami/minio 13.2.1

What architecture are you using?

amd64

What steps will reproduce the bug?

  1. With the following images in minio-values.yaml:
image:
  registry: docker.io
  repository: bitnami/minio
  tag: 2024.1.18-debian-11-r1
clientImage:
  registry: docker.io
  repository: bitnami/minio-client
  tag: 2024.1.18-debian-11-r1

# ignored lines

defaultBuckets: "test"

provisioning:
  enabled: true
  policies:
    - name: test
      statements:
        - effect: "Allow"
          actions: ["s3:*"]
          resources: ["arn:aws:s3:::test", "arn:aws:s3:::test/*"]

  users:
    - username: 'test'
      password: 'testtest'
      policies:
        - test
  2. Install MinIO:
helm upgrade --install minio minio-13.2.1.tgz -f minio-values.yaml -n minio
  3. Run:
mc alias set myminio https://<minio-ingress-host> test testtest
  4. Expect to see Added myminio successfully.

  5. Restart the MinIO pod manually with kubectl delete pod -l app.kubernetes.io/instance=minio -n minio

  6. Wait for the MinIO pod to be running, then run:

mc alias set myminio https://<minio-ingress-host> test testtest

This now fails with mc: <ERROR> Unable to initialize new alias from the provided credentials. The Access Key Id you provided does not exist in our records.

  7. Now change the image tags to:
image:
  registry: docker.io
  repository: bitnami/minio
  tag: 2023.5.18-debian-11-r2
clientImage:
  registry: docker.io
  repository: bitnami/minio-client
  tag: 2023.5.18-debian-11-r2
  8. Run the same steps 2-6; no error occurs. (A sketch for confirming where the data is lost follows this list.)
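To confirm the loss happens on the server side, one option (a hedged sketch, not part of the original report: it assumes the namespace and label selector used above, that the data directory is mounted at /data as in the full values further below, and that MinIO keeps its IAM metadata under .minio.sys inside its data directory) is to inspect the persistent volume after the restart:

# Find the MinIO pod and check whether the IAM metadata sits on the mounted volume.
POD=$(kubectl get pod -n minio -l app.kubernetes.io/instance=minio -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n minio "$POD" -- ls -a /data
kubectl exec -n minio "$POD" -- ls /data/.minio.sys/config/iam || echo "no IAM metadata on the volume"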

Are you using any custom parameters or values?

image:
  registry: docker.io
  repository: bitnami/minio
  tag: 2024.1.18-debian-11-r1
clientImage:
  registry: docker.io
  repository: bitnami/minio-client
  tag: 2024.1.18-debian-11-r1

mode: standalone
auth:
  rootUser: admin
  rootPassword: '{{ minio_admin_password }}'

defaultBuckets: "test"

provisioning:
  enabled: true
  policies:
    - name: test
      statements:
        - effect: "Allow"
          actions: ["s3:*"]
          resources: ["arn:aws:s3:::test", "arn:aws:s3:::test/*"]

  users:
    - username: 'test'
      password: 'testtest'
      policies:
        - test

containerPorts:
  api: 9000
  console: 9001

apiIngress:
  enabled: true
  hostname: ' {{ minio_ingress_host }}'
  path: "/"
  servicePort: minio-api
  ingressClassName: 'nginx'
  tls: true
  extraTls:
    - secretName: tls-secret
      hosts:
        - ' {{ minio_ingress_host }}'

ingress:
  enabled: true
  hostname: ' {{ minio_ingress_host }}'
  path: "/"
  servicePort: minio-console
  ingressClassName: 'nginx'
  tls: true

persistence:
  enabled: true
  mountPath: /data
  accessModes:
    - ReadWriteOnce
  size: '100Gi'
  annotations: { }
  existingClaim: ""

What is the expected behavior?

Expecting MinIO to persist data after a pod restart.

What do you see instead?

The MinIO pod lost the provisioned user credentials.

robinliubin avatar Jan 30 '24 19:01 robinliubin

Hi @robinliubin

Please correct me if I'm wrong, but AFAIK aliases are stored in the local mc configuration, see:

  • https://min.io/docs/minio/linux/reference/minio-mc/mc-alias-set.html

In other words, it's something saved on the "client side" instead of the "server side".

We don't persist the client side config on MinIO containers, therefore it's normal to lose these aliases if the container/pod gets recreated.
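For illustration, the alias ends up in a local file like this (a sketch; the config path and exact JSON layout depend on the mc version and home directory, here /.mc as seen in the client logs later in this thread):

# Inside the client container, the alias lives in mc's own config file:
cat /.mc/config.json
# {
#   "aliases": {
#     "myminio": {
#       "url": "https://<minio-ingress-host>",
#       "accessKey": "test",
#       "secretKey": "testtest",
#       ...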

juan131 avatar Feb 01 '24 11:02 juan131

@juan131 Thanks for helping. The issue we observed is not the "client side" losing the alias, but the "server side" losing the provisioned user credentials. As you can see in the values.yaml, provisioning is enabled on the server side.

However, on image tag 2024.1.18-debian-11-r1 the provisioned data is lost when the pod is restarted, while merely changing the image tag to 2023.5.18-debian-11-r2 makes the provisioned data persist across pod restarts.
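One way to double-check this from the server side (a hedged sketch, not from the original report; it assumes an alias set with the chart's root credentials) is to list the users via the admin API after the restart:

# Set an alias with the root user, then list the server-side users.
mc alias set adminalias https://<minio-ingress-host> admin <root-password>
mc admin user ls adminalias
# With 2024.1.18-debian-11-r1 the provisioned user "test" disappears after a restart;
# with 2023.5.18-debian-11-r2 it is still listed.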

robinliubin avatar Feb 01 '24 14:02 robinliubin

Hi @robinliubin

I was unable to reproduce the issue using the values.yaml below:

defaultBuckets: "test"
provisioning:
  enabled: true
  policies:
    - name: test
      statements:
        - effect: "Allow"
          actions: ["s3:*"]
          resources: ["arn:aws:s3:::test", "arn:aws:s3:::test/*"]
  users:
    - username: 'test'
      password: 'testtest'
      policies:
        - test

These are the steps I followed:

  • Install the chart & ensure provisioning worked as expected:
$ helm install minio oci://registry-1.docker.io/bitnamicharts/minio -f minio.yaml
NAME: minio
(...)
CHART NAME: minio
CHART VERSION: 13.3.4
APP VERSION: 2024.2.4
(...)
$ kubectl logs -l app.kubernetes.io/component=minio-provisioning -c minio
│ 127.0.0.1:9000 │ ✔      │
└────────────────┴────────┘

Restarted `provisioning` successfully in 503 milliseconds
Created policy `test` successfully.
Added user `test` successfully.
Attached Policies: [test]
To User: test
Enabled user `test` successfully.
End Minio provisioning
  • Create a "minio-client" pod to run the mc alias command:
$ kubectl run --namespace default minio-client --rm --tty -i --restart='Never' --image docker.io/bitnami/minio-client:2024.1.31-debian-11-r1 --command -- mc alias set myminio http://minio:9000 test testtest
mc: Configuration written to `/.mc/config.json`. Please update your access credentials.
mc: Successfully created `/.mc/share`.
mc: Initialized share uploads `/.mc/share/uploads.json` file.
mc: Initialized share downloads `/.mc/share/downloads.json` file.
Added `myminio` successfully.
pod "minio-client" deleted
  • Delete MinIO pod manually and create again a "minio-client" pod to run the mc alias command:
$ kubectl delete pod -l app.kubernetes.io/instance=minio
pod "minio-7fc546fdff-qqj2m" deleted
$ kubectl run --namespace default minio-client --rm --tty -i --restart='Never' --image docker.io/bitnami/minio-client:2024.1.31-debian-11-r1 --command -- mc alias set myminio http://minio:9000 test testtest
mc: Configuration written to `/.mc/config.json`. Please update your access credentials.
mc: Successfully created `/.mc/share`.
mc: Initialized share uploads `/.mc/share/uploads.json` file.
mc: Initialized share downloads `/.mc/share/downloads.json` file.
Added `myminio` successfully.
pod "minio-client" deleted

juan131 avatar Feb 06 '24 08:02 juan131

persistence:
  enabled: true
  mountPath: /data
  accessModes:
    - ReadWriteOnce
  size: '100Gi'
  annotations:
    "helm.sh/resource-policy": keep
  existingClaim: ""

In my test, persistence makes the difference: if this section is added, the issue is reproducible.

robinliubin avatar Feb 13 '24 22:02 robinliubin

Hi @robinliubin

I'm also enabling persistence in my tests (it's enabled by default). Why did you change the default mount path (see https://github.com/bitnami/charts/blob/main/bitnami/minio/values.yaml#L1012)? Please note it was replaced at https://github.com/bitnami/charts/commit/e707712fbd687ac271fdcecdf415f4f2a6aeb76e
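For reference, a persistence section that keeps the chart's current default mount path would look like this (a sketch based on the linked values.yaml; the remaining fields are taken from the reporter's values above):

persistence:
  enabled: true
  # Default in recent chart versions; must match the data path the image expects.
  mountPath: /bitnami/minio/data
  accessModes:
    - ReadWriteOnce
  size: '100Gi'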

juan131 avatar Feb 15 '24 11:02 juan131

Tested with the default mountPath: /bitnami/minio/data, and now the data is persisted. Though I still don't understand why mountPath would lead to this issue.

robinliubin avatar Feb 15 '24 15:02 robinliubin

Hi @robinliubin

The new container image expects the data to be mounted at a different path; see the value of MINIO_DATA_DIR:

  • https://github.com/bitnami/containers/tree/main/bitnami/minio#customizable-environment-variables

Therefore, the mount path must be aligned with that.
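A quick way to spot the mismatch (a hedged sketch; MINIO_DATA_DIR may not appear in the environment if the image falls back to its built-in default of /bitnami/minio/data):

# Compare where the image expects its data with where the volume is mounted.
kubectl exec -n minio deploy/minio -- env | grep MINIO_DATA_DIR
kubectl exec -n minio deploy/minio -- df -h /bitnami/minio/data
# If the PVC is mounted at /data instead, MinIO writes its IAM metadata to the
# container's ephemeral filesystem and loses it when the pod restarts.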

juan131 avatar Feb 19 '24 07:02 juan131

@juan131, if it has to be static, then the Helm chart should not expose it, to avoid the value being wrongly modified.

robinliubin avatar Mar 05 '24 19:03 robinliubin

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] avatar Mar 21 '24 01:03 github-actions[bot]

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.

github-actions[bot] avatar Mar 27 '24 01:03 github-actions[bot]