[bitnami/minio] Deployment Errors
Name and Version
bitnami/minio:14.1.7
What architecture are you using?
arm64
What steps will reproduce the bug?
Hey, I tried to deploy a MinIO cluster in distributed mode inside my Kubernetes cluster, but the deployment does not work and several errors show up in the MinIO pods.
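For context, the release was installed roughly like this (a sketch; the release name and namespace are taken from the logs below, and the values file name is an assumption):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install minio-cluster bitnami/minio \
  --namespace minio-cluster --create-namespace \
  --version 14.1.7 \
  -f values.yaml
The pod logs show the following: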
[root::bastion-01-de-nbg1-dc3]
~/minio-cluster: kubectl --namespace minio-cluster logs pods/minio-cluster-0 -f
19:13:09.31 INFO ==>
19:13:09.31 INFO ==> Welcome to the Bitnami minio container
19:13:09.31 INFO ==> Subscribe to project updates by watching https://github.com/bitnami/containers
19:13:09.31 INFO ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
19:13:09.32 INFO ==> Upgrade to Tanzu Application Catalog for production environments to access custom-configured and pre-packaged software components. Gain enhanced features, including Software Bill of Materials (SBOM), CVE scan result reports, and VEX documents. To learn more, visit https://bitnami.com/enterprise
19:13:09.32 INFO ==>
19:13:09.32 INFO ==> ** Starting MinIO setup **
19:13:09.41 INFO ==> ** MinIO setup finished! **
minio 19:13:09.43 INFO ==> ** Starting MinIO **
Waiting for at least 1 remote servers with valid configuration to be online
Following servers are currently offline or unreachable [http://minio-cluster-0.minio-cluster-headless.minio-cluster.svc.cluster.local:9000->http://minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local:9000 is unreachable: remote disconnected http://minio-cluster-0.minio-cluster-headless.minio-cluster.svc.cluster.local:9000->http://minio-cluster-1.minio-cluster-headless.minio-cluster.svc.cluster.local:9000 is unreachable: remote disconnected http://minio-cluster-0.minio-cluster-headless.minio-cluster.svc.cluster.local:9000->http://minio-cluster-2.minio-cluster-headless.minio-cluster.svc.cluster.local:9000 is unreachable: remote disconnected]
API: SYSTEM.grid
Time: 19:13:11 UTC 04/15/2024
Error: grid: http://minio-cluster-0.minio-cluster-headless.minio-cluster.svc.cluster.local:9000 connecting to ws://minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/minio/grid/v1: lookup minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local on 169.254.25.10:53: no such host (*net.DNSError) Sleeping 1.018s (3) (*fmt.wrapError)
6: internal/logger/logonce.go:118:logger.(*logOnceType).logOnceIf()
5: internal/logger/logonce.go:149:logger.LogOnceIf()
4: internal/grid/connection.go:59:grid.gridLogOnceIf()
3: internal/grid/connection.go:682:grid.(*Connection).connect.func1()
2: internal/grid/connection.go:688:grid.(*Connection).connect()
1: internal/grid/connection.go:260:grid.newConnection.func1()
API: SYSTEM.internal
Time: 19:13:11 UTC 04/15/2024
Error: Read failed. Insufficient number of drives online (*errors.errorString)
11: internal/logger/logger.go:259:logger.LogIf()
10: cmd/logging.go:90:cmd.internalLogIf()
9: cmd/prepare-storage.go:243:cmd.connectLoadInitFormats()
8: cmd/prepare-storage.go:286:cmd.waitForFormatErasure()
7: cmd/erasure-server-pool.go:129:cmd.newErasureServerPools.func1()
6: cmd/server-main.go:512:cmd.bootstrapTrace()
5: cmd/erasure-server-pool.go:128:cmd.newErasureServerPools()
4: cmd/server-main.go:1066:cmd.newObjectLayer()
3: cmd/server-main.go:815:cmd.serverMain.func10()
2: cmd/server-main.go:512:cmd.bootstrapTrace()
1: cmd/server-main.go:813:cmd.serverMain()
API: SYSTEM.internal
Time: 19:13:11 UTC 04/15/2024
Error: Read failed. Insufficient number of drives online (*errors.errorString)
11: internal/logger/logger.go:259:logger.LogIf()
10: cmd/logging.go:90:cmd.internalLogIf()
9: cmd/prepare-storage.go:243:cmd.connectLoadInitFormats()
8: cmd/prepare-storage.go:304:cmd.waitForFormatErasure()
7: cmd/erasure-server-pool.go:129:cmd.newErasureServerPools.func1()
6: cmd/server-main.go:512:cmd.bootstrapTrace()
5: cmd/erasure-server-pool.go:128:cmd.newErasureServerPools()
4: cmd/server-main.go:1066:cmd.newObjectLayer()
3: cmd/server-main.go:815:cmd.serverMain.func10()
2: cmd/server-main.go:512:cmd.bootstrapTrace()
1: cmd/server-main.go:813:cmd.serverMain()
Waiting for a minimum of 2 drives to come online (elapsed 0s)
API: SYSTEM.grid
Time: 19:13:12 UTC 04/15/2024
Error: grid: http://minio-cluster-0.minio-cluster-headless.minio-cluster.svc.cluster.local:9000 connecting to ws://minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/minio/grid/v1: lookup minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local on 169.254.25.10:53: no such host (*net.DNSError) Sleeping 1.812s (3) (*fmt.wrapError)
6: internal/logger/logonce.go:118:logger.(*logOnceType).logOnceIf()
5: internal/logger/logonce.go:149:logger.LogOnceIf()
4: internal/grid/connection.go:59:grid.gridLogOnceIf()
3: internal/grid/connection.go:682:grid.(*Connection).connect.func1()
2: internal/grid/connection.go:688:grid.(*Connection).connect()
1: internal/grid/connection.go:260:grid.newConnection.func1()
Waiting for all other servers to be online to format the drives (elapses 1s)
Waiting for all other servers to be online to format the drives (elapses 2s)
API: SYSTEM.grid
Time: 19:13:14 UTC 04/15/2024
Error: grid: http://minio-cluster-0.minio-cluster-headless.minio-cluster.svc.cluster.local:9000 connecting to ws://minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/minio/grid/v1: lookup minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local on 169.254.25.10:53: no such host (*net.DNSError) Sleeping 1.81s (3) (*fmt.wrapError)
6: internal/logger/logonce.go:118:logger.(*logOnceType).logOnceIf()
5: internal/logger/logonce.go:149:logger.LogOnceIf()
4: internal/grid/connection.go:59:grid.gridLogOnceIf()
3: internal/grid/connection.go:682:grid.(*Connection).connect.func1()
2: internal/grid/connection.go:688:grid.(*Connection).connect()
1: internal/grid/connection.go:260:grid.newConnection.func1()
Waiting for all other servers to be online to format the drives (elapses 3s)
Waiting for all other servers to be online to format the drives (elapses 4s)
API: SYSTEM.grid
Time: 19:13:16 UTC 04/15/2024
Error: grid: http://minio-cluster-0.minio-cluster-headless.minio-cluster.svc.cluster.local:9000 connecting to ws://minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/minio/grid/v1: lookup minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local on 169.254.25.10:53: no such host (*net.DNSError) Sleeping 1.44s (3) (*fmt.wrapError)
6: internal/logger/logonce.go:118:logger.(*logOnceType).logOnceIf()
5: internal/logger/logonce.go:149:logger.LogOnceIf()
4: internal/grid/connection.go:59:grid.gridLogOnceIf()
3: internal/grid/connection.go:682:grid.(*Connection).connect.func1()
2: internal/grid/connection.go:688:grid.(*Connection).connect()
1: internal/grid/connection.go:260:grid.newConnection.func1()
Waiting for all other servers to be online to format the drives (elapses 5s)
API: SYSTEM.grid
Time: 19:13:17 UTC 04/15/2024
Error: grid: http://minio-cluster-0.minio-cluster-headless.minio-cluster.svc.cluster.local:9000 connecting to ws://minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/minio/grid/v1: lookup minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local on 169.254.25.10:53: no such host (*net.DNSError) Sleeping 1.911s (3) (*fmt.wrapError)
6: internal/logger/logonce.go:118:logger.(*logOnceType).logOnceIf()
5: internal/logger/logonce.go:149:logger.LogOnceIf()
4: internal/grid/connection.go:59:grid.gridLogOnceIf()
3: internal/grid/connection.go:682:grid.(*Connection).connect.func1()
2: internal/grid/connection.go:688:grid.(*Connection).connect()
1: internal/grid/connection.go:260:grid.newConnection.func1()
Waiting for all other servers to be online to format the drives (elapses 6s)
Waiting for all other servers to be online to format the drives (elapses 7s)
API: SYSTEM.grid
Time: 19:13:19 UTC 04/15/2024
Error: grid: http://minio-cluster-0.minio-cluster-headless.minio-cluster.svc.cluster.local:9000 connecting to ws://minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/minio/grid/v1: lookup minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local on 169.254.25.10:53: no such host (*net.DNSError) Sleeping 1.045s (3) (*fmt.wrapError)
6: internal/logger/logonce.go:118:logger.(*logOnceType).logOnceIf()
5: internal/logger/logonce.go:149:logger.LogOnceIf()
4: internal/grid/connection.go:59:grid.gridLogOnceIf()
3: internal/grid/connection.go:682:grid.(*Connection).connect.func1()
2: internal/grid/connection.go:688:grid.(*Connection).connect()
1: internal/grid/connection.go:260:grid.newConnection.func1()
Waiting for all other servers to be online to format the drives (elapses 8s)
API: SYSTEM.grid
Time: 19:13:20 UTC 04/15/2024
Error: grid: http://minio-cluster-0.minio-cluster-headless.minio-cluster.svc.cluster.local:9000 connecting to ws://minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/minio/grid/v1: lookup minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local on 169.254.25.10:53: no such host (*net.DNSError) Sleeping 1.161s (3) (*fmt.wrapError)
6: internal/logger/logonce.go:118:logger.(*logOnceType).logOnceIf()
5: internal/logger/logonce.go:149:logger.LogOnceIf()
4: internal/grid/connection.go:59:grid.gridLogOnceIf()
3: internal/grid/connection.go:682:grid.(*Connection).connect.func1()
2: internal/grid/connection.go:688:grid.(*Connection).connect()
1: internal/grid/connection.go:260:grid.newConnection.func1()
Waiting for all other servers to be online to format the drives (elapses 9s)
Waiting for all other servers to be online to format the drives (elapses 10s)
Formatting 1st pool, 1 set(s), 4 drives per set.
Waiting for all MinIO sub-systems to be initialize...
Automatically configured API requests per node based on available memory on the system: 44
All MinIO sub-systems initialized successfully in 56.780541ms
MinIO Object Storage Server
Copyright: 2015-2024 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: DEVELOPMENT.2024-04-06T05-26-02Z (go1.21.9 linux/arm64)
API: http://localhost:9000
WebUI: http://10.233.123.25:9001 http://127.0.0.1:9001
Docs: https://min.io/docs/minio/linux/index.html
Use `mc admin info` to look for latest server/drive info
Status: 1 Online, 3 Offline.
Restarting on service signal
Waiting for all MinIO sub-systems to be initialize...
Automatically configured API requests per node based on available memory on the system: 44
All MinIO sub-systems initialized successfully in 5.327665ms
MinIO Object Storage Server
Copyright: 2015-2024 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: DEVELOPMENT.2024-04-06T05-26-02Z (go1.21.9 linux/arm64)
API: http://localhost:9000
WebUI: http://10.233.123.25:9001 http://127.0.0.1:9001
Docs: https://min.io/docs/minio/linux/index.html
Use `mc admin info` to look for latest server/drive info
Status: 3 Online, 1 Offline.
API: SYSTEM.peers
Time: 19:13:26 UTC 04/15/2024
DeploymentID: 89a08ab8-0af0-4e23-9930-0f302f7e0a8f
Error: Drive: http://minio-cluster-2.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/bitnami/minio/data returned drive not found (*fmt.wrapError)
endpoint="http://minio-cluster-2.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/bitnami/minio/data"
4: internal/logger/logger.go:249:logger.LogAlwaysIf()
3: cmd/logging.go:46:cmd.peersLogAlwaysIf()
2: cmd/prepare-storage.go:51:cmd.glob..func24.1()
1: cmd/erasure-sets.go:235:cmd.(*erasureSets).connectDisks.func2()
Restarting on service signal
Waiting for all MinIO sub-systems to be initialize...
Automatically configured API requests per node based on available memory on the system: 44
All MinIO sub-systems initialized successfully in 9.204403ms
MinIO Object Storage Server
Copyright: 2015-2024 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: DEVELOPMENT.2024-04-06T05-26-02Z (go1.21.9 linux/arm64)
API: http://localhost:9000
WebUI: http://10.233.123.25:9001 http://127.0.0.1:9001
Docs: https://min.io/docs/minio/linux/index.html
Use `mc admin info` to look for latest server/drive info
Status: 2 Online, 2 Offline.
API: SYSTEM.peers
Time: 19:13:30 UTC 04/15/2024
DeploymentID: 89a08ab8-0af0-4e23-9930-0f302f7e0a8f
Error: Drive: http://minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/bitnami/minio/data returned drive not found (*fmt.wrapError)
endpoint="http://minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/bitnami/minio/data"
4: internal/logger/logger.go:249:logger.LogAlwaysIf()
3: cmd/logging.go:46:cmd.peersLogAlwaysIf()
2: cmd/prepare-storage.go:51:cmd.glob..func24.1()
1: cmd/erasure-sets.go:235:cmd.(*erasureSets).connectDisks.func2()
Restarting on service signal
Waiting for all MinIO sub-systems to be initialize...
Automatically configured API requests per node based on available memory on the system: 44
All MinIO sub-systems initialized successfully in 7.236494ms
MinIO Object Storage Server
Copyright: 2015-2024 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: DEVELOPMENT.2024-04-06T05-26-02Z (go1.21.9 linux/arm64)
API: http://localhost:9000
WebUI: http://10.233.123.25:9001 http://127.0.0.1:9001
Docs: https://min.io/docs/minio/linux/index.html
Use `mc admin info` to look for latest server/drive info
Status: 3 Online, 1 Offline.
API: SYSTEM.peers
Time: 19:14:14 UTC 04/15/2024
DeploymentID: 89a08ab8-0af0-4e23-9930-0f302f7e0a8f
Error: Drive: http://minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/bitnami/minio/data returned drive not found (*fmt.wrapError)
endpoint="http://minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/bitnami/minio/data"
4: internal/logger/logger.go:249:logger.LogAlwaysIf()
3: cmd/logging.go:46:cmd.peersLogAlwaysIf()
2: cmd/prepare-storage.go:51:cmd.glob..func24.1()
1: cmd/erasure-sets.go:235:cmd.(*erasureSets).connectDisks.func2()
Restarting on service signal
Waiting for all MinIO sub-systems to be initialize...
Automatically configured API requests per node based on available memory on the system: 44
All MinIO sub-systems initialized successfully in 9.301661ms
MinIO Object Storage Server
Copyright: 2015-2024 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: DEVELOPMENT.2024-04-06T05-26-02Z (go1.21.9 linux/arm64)
API: http://localhost:9000
WebUI: http://10.233.123.25:9001 http://127.0.0.1:9001
Docs: https://min.io/docs/minio/linux/index.html
Use `mc admin info` to look for latest server/drive info
Status: 3 Online, 1 Offline.
API: SYSTEM.peers
Time: 19:15:12 UTC 04/15/2024
DeploymentID: 89a08ab8-0af0-4e23-9930-0f302f7e0a8f
Error: Drive: http://minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/bitnami/minio/data returned drive not found (*fmt.wrapError)
endpoint="http://minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/bitnami/minio/data"
4: internal/logger/logger.go:249:logger.LogAlwaysIf()
3: cmd/logging.go:46:cmd.peersLogAlwaysIf()
2: cmd/prepare-storage.go:51:cmd.glob..func24.1()
1: cmd/erasure-sets.go:235:cmd.(*erasureSets).connectDisks.func2()
Restarting on service signal
Waiting for all MinIO sub-systems to be initialize...
Automatically configured API requests per node based on available memory on the system: 44
All MinIO sub-systems initialized successfully in 4.716771ms
MinIO Object Storage Server
Copyright: 2015-2024 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: DEVELOPMENT.2024-04-06T05-26-02Z (go1.21.9 linux/arm64)
API: http://localhost:9000
WebUI: http://10.233.123.25:9001 http://127.0.0.1:9001
Docs: https://min.io/docs/minio/linux/index.html
Use `mc admin info` to look for latest server/drive info
Status: 2 Online, 2 Offline.
API: SYSTEM.peers
Time: 19:15:57 UTC 04/15/2024
DeploymentID: 89a08ab8-0af0-4e23-9930-0f302f7e0a8f
Error: Drive: http://minio-cluster-2.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/bitnami/minio/data returned drive not found (*fmt.wrapError)
endpoint="http://minio-cluster-2.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/bitnami/minio/data"
4: internal/logger/logger.go:249:logger.LogAlwaysIf()
3: cmd/logging.go:46:cmd.peersLogAlwaysIf()
2: cmd/prepare-storage.go:51:cmd.glob..func24.1()
1: cmd/erasure-sets.go:235:cmd.(*erasureSets).connectDisks.func2()
API: SYSTEM.peers
Time: 19:15:57 UTC 04/15/2024
DeploymentID: 89a08ab8-0af0-4e23-9930-0f302f7e0a8f
Error: Drive: http://minio-cluster-1.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/bitnami/minio/data returned drive not found (*fmt.wrapError)
endpoint="http://minio-cluster-1.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/bitnami/minio/data"
4: internal/logger/logger.go:249:logger.LogAlwaysIf()
3: cmd/logging.go:46:cmd.peersLogAlwaysIf()
2: cmd/prepare-storage.go:51:cmd.glob..func24.1()
1: cmd/erasure-sets.go:235:cmd.(*erasureSets).connectDisks.func2()
Restarting on service signal
Waiting for all MinIO sub-systems to be initialize...
Automatically configured API requests per node based on available memory on the system: 44
All MinIO sub-systems initialized successfully in 5.465655ms
MinIO Object Storage Server
Copyright: 2015-2024 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: DEVELOPMENT.2024-04-06T05-26-02Z (go1.21.9 linux/arm64)
API: http://localhost:9000
WebUI: http://10.233.123.25:9001 http://127.0.0.1:9001
Docs: https://min.io/docs/minio/linux/index.html
Use `mc admin info` to look for latest server/drive info
Status: 3 Online, 1 Offline.
API: SYSTEM.peers
Time: 19:17:29 UTC 04/15/2024
DeploymentID: 89a08ab8-0af0-4e23-9930-0f302f7e0a8f
Error: Drive: http://minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/bitnami/minio/data returned drive not found (*fmt.wrapError)
endpoint="http://minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/bitnami/minio/data"
4: internal/logger/logger.go:249:logger.LogAlwaysIf()
3: cmd/logging.go:46:cmd.peersLogAlwaysIf()
2: cmd/prepare-storage.go:51:cmd.glob..func24.1()
1: cmd/erasure-sets.go:235:cmd.(*erasureSets).connectDisks.func2()
Restarting on service signal
Waiting for all MinIO sub-systems to be initialize...
Automatically configured API requests per node based on available memory on the system: 44
All MinIO sub-systems initialized successfully in 9.119629ms
MinIO Object Storage Server
Copyright: 2015-2024 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: DEVELOPMENT.2024-04-06T05-26-02Z (go1.21.9 linux/arm64)
API: http://localhost:9000
WebUI: http://10.233.123.25:9001 http://127.0.0.1:9001
Docs: https://min.io/docs/minio/linux/index.html
Use `mc admin info` to look for latest server/drive info
Status: 3 Online, 1 Offline.
API: SYSTEM.peers
Time: 19:20:51 UTC 04/15/2024
DeploymentID: 89a08ab8-0af0-4e23-9930-0f302f7e0a8f
Error: Drive: http://minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/bitnami/minio/data returned drive not found (*fmt.wrapError)
endpoint="http://minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local:9000/bitnami/minio/data"
4: internal/logger/logger.go:249:logger.LogAlwaysIf()
3: cmd/logging.go:46:cmd.peersLogAlwaysIf()
2: cmd/prepare-storage.go:51:cmd.glob..func24.1()
1: cmd/erasure-sets.go:235:cmd.(*erasureSets).connectDisks.func2()
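The repeated `no such host` errors indicate that the headless-service records for the peer pods could not be resolved via 169.254.25.10 (which looks like a NodeLocal DNSCache address). Resolution can be tested from inside the cluster with a throwaway pod (a sketch; busybox is just a convenient image choice):
kubectl --namespace minio-cluster run dns-test --rm -it --restart=Never \
  --image=busybox:1.36 -- \
  nslookup minio-cluster-3.minio-cluster-headless.minio-cluster.svc.cluster.local
If that lookup fails while the pod is Running, the problem is on the DNS side rather than in the chart.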
Are you using any custom parameters or values?
global:
  storageClass: "rook-cephfs"
mode: distributed
auth:
  ## @param auth.rootUser MinIO® root username
  ##
  rootUser: admin
  ## @param auth.rootPassword Password for MinIO® root user
  ##
  rootPassword: "Test2024"
  ## @param auth.existingSecret Use existing secret for credentials details (`auth.rootUser` and `auth.rootPassword` will be ignored and picked up from this secret). The secret has to contain the keys `root-user` and `root-password`.
  ##
  existingSecret: ""
  useCredentialsFiles: false
  forceNewKeys: false
disableWebUI: false
tls:
  enabled: false
  autoGenerated: false
  existingSecret: ""
provisioning:
  enabled: true
  resourcesPreset: "medium"
  ## @param provisioning.policies MinIO® policies provisioning
  ## https://docs.min.io/docs/minio-admin-complete-guide.html#policy
  ## e.g.
  ## policies:
  ##   - name: custom-bucket-specific-policy
  ##     statements:
  ##       - resources:
  ##           - "arn:aws:s3:::my-bucket"
  ##         actions:
  ##           - "s3:GetBucketLocation"
  ##           - "s3:ListBucket"
  ##           - "s3:ListBucketMultipartUploads"
  ##       - resources:
  ##           - "arn:aws:s3:::my-bucket/*"
  ##         # Allowed values: "Allow" | "Deny"
  ##         # Defaults to "Deny" if not specified
  ##         effect: "Allow"
  ##         actions:
  ##           - "s3:AbortMultipartUpload"
  ##           - "s3:DeleteObject"
  ##           - "s3:GetObject"
  ##           - "s3:ListMultipartUploadParts"
  ##           - "s3:PutObject"
  policies: []
  ## @param provisioning.users MinIO® users provisioning. Can be used in addition to provisioning.usersExistingSecrets.
  ## https://docs.min.io/docs/minio-admin-complete-guide.html#user
  ## e.g.
  ## users:
  ##   - username: test-username
  ##     password: test-password
  ##     disabled: false
  ##     policies:
  ##       - readwrite
  ##       - consoleAdmin
  ##       - diagnostics
  ##     # When set to true, it will replace all policies with the specified.
  ##     # When false, the policies will be added to the existing.
  ##     setPolicies: false
  users: []
  ## @param provisioning.usersExistingSecrets Array of existing secrets containing MinIO® users to be provisioned. Can be used in addition to provisioning.users.
  ## https://docs.min.io/docs/minio-admin-complete-guide.html#user
  ##
  ## Instead of configuring users inside values.yaml, referring to existing Kubernetes secrets containing user
  ## configurations is possible.
  ## e.g.
  ## usersExistingSecrets:
  ##   - centralized-minio-users
  ##
  ## All provided Kubernetes secrets require a specific data structure. The same data from the provisioning.users example above
  ## can be defined via secrets with the following data structure. The secret keys have no meaning to the provisioning job except that
  ## they are used as filenames.
  ##   apiVersion: v1
  ##   kind: Secret
  ##   metadata:
  ##     name: centralized-minio-users
  ##   type: Opaque
  ##   stringData:
  ##     username1: |
  ##       username=test-username
  ##       password=test-password
  ##       disabled=false
  ##       policies=readwrite,consoleAdmin,diagnostics
  ##       setPolicies=false
  usersExistingSecrets: []
  ## @param provisioning.groups MinIO® groups provisioning
  ## https://docs.min.io/docs/minio-admin-complete-guide.html#group
  ## e.g.
  ## groups:
  ##   - name: test-group
  ##     disabled: false
  ##     members:
  ##       - test-username
  ##     policies:
  ##       - readwrite
  ##     # When set to true, it will replace all policies with the specified.
  ##     # When false, the policies will be added to the existing.
  ##     setPolicies: false
  groups: []
  ## @param provisioning.buckets MinIO® buckets, versioning, lifecycle, quota and tags provisioning
  ## Buckets https://docs.min.io/docs/minio-client-complete-guide.html#mb
  ## Lifecycle https://docs.min.io/docs/minio-client-complete-guide.html#ilm
  ## Quotas https://docs.min.io/docs/minio-admin-complete-guide.html#bucket
  ## Tags https://docs.min.io/docs/minio-client-complete-guide.html#tag
  ## Versioning https://docs.min.io/docs/minio-client-complete-guide.html#version
  ## e.g.
  ## buckets:
  ##   - name: test-bucket
  ##     region: us-east-1
  ##     # Only when mode is 'distributed'
  ##     # Allowed values: "Versioned" | "Suspended" | "Unchanged"
  ##     # Defaults to "Suspended" if not specified.
  ##     # For compatibility, accepts boolean values as well, where true maps
  ##     # to "Versioned" and false to "Suspended".
  ##     # ref: https://docs.minio.io/docs/distributed-minio-quickstart-guide
  ##     versioning: Suspended
  ##     # Versioning is automatically enabled if withLock is true
  ##     # ref: https://docs.min.io/docs/minio-bucket-versioning-guide.html
  ##     withLock: true
  ##     # Only when mode is 'distributed'
  ##     # ref: https://docs.minio.io/docs/distributed-minio-quickstart-guide
  ##     lifecycle:
  ##       - id: TestPrefix7dRetention
  ##         prefix: test-prefix
  ##         disabled: false
  ##         expiry:
  ##           days: 7
  ##           # Days !OR! date
  ##           # date: "2021-11-11T00:00:00Z"
  ##           nonconcurrentDays: 3
  ##     # Only when mode is 'distributed'
  ##     # ref: https://docs.minio.io/docs/distributed-minio-quickstart-guide
  ##     quota:
  ##       # set (hard still works as an alias but is deprecated) or clear(+ omit size)
  ##       type: set
  ##       size: 10GiB
  ##     tags:
  ##       key1: value1
  buckets:
    - name: artifacts
      versioning: Versioned
      quota:
        type: set
        size: 20GiB
      region: de-nbg1-dc3
    - name: external_diffs
      versioning: Versioned
      quota:
        type: set
        size: 20GiB
      region: de-nbg1-dc3
    - name: uploads
      versioning: Versioned
      quota:
        type: set
        size: 20GiB
      region: de-nbg1-dc3
    - name: lfs
      versioning: Versioned
      quota:
        type: set
        size: 20GiB
      region: de-nbg1-dc3
    - name: packages
      versioning: Versioned
      quota:
        type: set
        size: 20GiB
      region: de-nbg1-dc3
    - name: dependency_proxy
      versioning: Versioned
      quota:
        type: set
        size: 20GiB
      region: de-nbg1-dc3
    - name: terraform_state
      versioning: Versioned
      quota:
        type: set
        size: 20GiB
      region: de-nbg1-dc3
    - name: pages
      versioning: Versioned
      quota:
        type: set
        size: 20GiB
      region: de-nbg1-dc3
    - name: ci_secure_files
      versioning: Versioned
      quota:
        type: set
        size: 20GiB
      region: de-nbg1-dc3
  ## @param provisioning.config MinIO® config provisioning
  ## https://docs.min.io/docs/minio-server-configuration-guide.html
  ## e.g.
  ## config:
  ##   - name: region
  ##     options:
  ##       name: us-east-1
  config:
    - name: region
      options:
        name: de-nbg1-dc3
  containerSecurityContext:
    enabled: true
    seLinuxOptions: {}
    runAsUser: 1001
    runAsGroup: 1001
    runAsNonRoot: true
    privileged: false
    readOnlyRootFilesystem: true
    allowPrivilegeEscalation: false
    capabilities:
      drop: ["ALL"]
    seccompProfile:
      type: "RuntimeDefault"
containerSecurityContext:
  enabled: true
  seLinuxOptions: {}
  runAsUser: 1001
  runAsGroup: 1001
  runAsNonRoot: true
  privileged: false
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
  seccompProfile:
    type: "RuntimeDefault"
resourcesPreset: "medium"
ingress:
  enabled: false
  ingressClassName: ""
  hostname: minio.local
  path: /
  pathType: ImplementationSpecific
  ## @param ingress.servicePort Service port to be used
  ## Default is http. Alternative is https.
  ##
  servicePort: minio-console
  annotations: {}
  tls: false
  selfSigned: false
apiIngress:
  enabled: false
  ingressClassName: ""
  hostname: minio.local
  path: /
  pathType: ImplementationSpecific
  ## @param apiIngress.servicePort Service port to be used
  ## Default is http. Alternative is https.
  ##
  servicePort: minio-api
  annotations: {}
  tls: false
  selfSigned: false
persistence:
  enabled: true
  storageClass: "rook-cephfs"
  mountPath: /bitnami/minio/data
  accessModes:
    - ReadWriteMany
  size: 100Gi
metrics:
  prometheusAuthType: public
  serviceMonitor:
    enabled: false
  prometheusRule:
    enabled: false
    ## @param metrics.prometheusRule.rules Prometheus Rule definitions
    # - alert: minio cluster nodes offline
    #   annotations:
    #     summary: "minio cluster nodes offline"
    #     description: "minio cluster nodes offline, pod {{`{{`}} $labels.pod {{`}}`}} service {{`{{`}} $labels.job {{`}}`}} offline"
    #   for: 10m
    #   expr: minio_cluster_nodes_offline_total > 0
    #   labels:
    #     severity: critical
    #     group: PaaS
    ##
    rules: []
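As a side note, the exact provisioning commands the chart renders from these values can be inspected without deploying anything (a sketch; the template filename is an assumption and may differ between chart versions):
helm template minio-cluster bitnami/minio --version 14.1.7 \
  -f values.yaml --show-only templates/provisioning-job.yaml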
What is the expected behavior?
No response
What do you see instead?
The provisioning job does not succeed, and the MinIO cluster does not come up.
Job:
[root::bastion-01-de-nbg1-dc3]
~/minio-cluster: kubectl --namespace minio-cluster describe jobs.batch minio-cluster-provisioning
Name: minio-cluster-provisioning
Namespace: minio-cluster
Selector: batch.kubernetes.io/controller-uid=30e55ae4-bb17-4527-8578-5fac6ce27113
Labels: app.kubernetes.io/component=minio-provisioning
app.kubernetes.io/instance=minio-cluster
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=minio
app.kubernetes.io/version=2024.4.6
helm.sh/chart=minio-14.1.7
Annotations: helm.sh/hook: post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation
Parallelism: 1
Completions: <unset>
Completion Mode: NonIndexed
Start Time: Mon, 15 Apr 2024 21:13:01 +0200
Pods Statuses: 0 Active (1 Ready) / 0 Succeeded / 1 Failed
Pod Template:
Labels: app.kubernetes.io/component=minio-provisioning
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/version=2024.4.6
batch.kubernetes.io/controller-uid=30e55ae4-bb17-4527-8578-5fac6ce27113
batch.kubernetes.io/job-name=minio-cluster-provisioning
controller-uid=30e55ae4-bb17-4527-8578-5fac6ce27113
helm.sh/chart=minio-14.1.7
job-name=minio-cluster-provisioning
Service Account: minio-cluster
Init Containers:
wait-for-available-minio:
Image: docker.io/bitnami/minio:2024.4.6-debian-12-r0
Port: <none>
Host Port: <none>
SeccompProfile: RuntimeDefault
Command:
/bin/bash
-c
set -e;
echo "Waiting for Minio";
wait-for-port \
--host=minio-cluster \
--state=inuse \
--timeout=120 \
9000;
echo "Minio is available";
Limits:
cpu: 750m
ephemeral-storage: 1Gi
memory: 1536Mi
Requests:
cpu: 500m
ephemeral-storage: 50Mi
memory: 1Gi
Environment: <none>
Mounts: <none>
Containers:
minio:
Image: docker.io/bitnami/minio:2024.4.6-debian-12-r0
Port: <none>
Host Port: <none>
SeccompProfile: RuntimeDefault
Command:
/bin/bash
-c
set -e; echo "Start Minio provisioning";
function attachPolicy() {
local tmp=$(mc admin $1 info provisioning $2 | sed -n -e 's/^Policy.*: \(.*\)$/\1/p');
IFS=',' read -r -a CURRENT_POLICIES <<< "$tmp";
if [[ ! "${CURRENT_POLICIES[*]}" =~ "$3" ]]; then
mc admin policy attach provisioning $3 --$1=$2;
fi;
};
function detachDanglingPolicies() {
local tmp=$(mc admin $1 info provisioning $2 | sed -n -e 's/^Policy.*: \(.*\)$/\1/p');
IFS=',' read -r -a CURRENT_POLICIES <<< "$tmp";
IFS=',' read -r -a DESIRED_POLICIES <<< "$3";
for current in "${CURRENT_POLICIES[@]}"; do
if [[ ! "${DESIRED_POLICIES[*]}" =~ "${current}" ]]; then
mc admin policy detach provisioning $current --$1=$2;
fi;
done;
}
function addUsersFromFile() {
local username=$(grep -oP '^username=\K.+' $1);
local password=$(grep -oP '^password=\K.+' $1);
local disabled=$(grep -oP '^disabled=\K.+' $1);
local policies_list=$(grep -oP '^policies=\K.+' $1);
local set_policies=$(grep -oP '^setPolicies=\K.+' $1);
mc admin user add provisioning "${username}" "${password}";
IFS=',' read -r -a POLICIES <<< "${policies_list}";
for policy in "${POLICIES[@]}"; do
attachPolicy user "${username}" "${policy}";
done;
if [ "${set_policies}" == "true" ]; then
detachDanglingPolicies user "${username}" "${policies_list}";
fi;
local user_status="enable";
if [[ "${disabled}" != "" && "${disabled,,}" == "true" ]]; then
user_status="disable";
fi;
mc admin user "${user_status}" provisioning "${username}";
}; mc alias set provisioning $MINIO_SCHEME://minio-cluster:9000 $MINIO_ROOT_USER $MINIO_ROOT_PASSWORD; mc admin config set provisioning region name=de-nbg1-dc3;
mc admin service restart provisioning; mc mb provisioning/artifacts --ignore-existing --region=de-nbg1-dc3 ; mc quota set provisioning/artifacts --size 20GiB; mc version enable provisioning/artifacts; mc mb provisioning/external_diffs --ignore-existing --region=de-nbg1-dc3 ; mc quota set provisioning/external_diffs --size 20GiB; mc version enable provisioning/external_diffs; mc mb provisioning/uploads --ignore-existing --region=de-nbg1-dc3 ; mc quota set provisioning/uploads --size 20GiB; mc version enable provisioning/uploads; mc mb provisioning/lfs --ignore-existing --region=de-nbg1-dc3 ; mc quota set provisioning/lfs --size 20GiB; mc version enable provisioning/lfs; mc mb provisioning/packages --ignore-existing --region=de-nbg1-dc3 ; mc quota set provisioning/packages --size 20GiB; mc version enable provisioning/packages; mc mb provisioning/dependency_proxy --ignore-existing --region=de-nbg1-dc3 ; mc quota set provisioning/dependency_proxy --size 20GiB; mc version enable provisioning/dependency_proxy; mc mb provisioning/terraform_state --ignore-existing --region=de-nbg1-dc3 ; mc quota set provisioning/terraform_state --size 20GiB; mc version enable provisioning/terraform_state; mc mb provisioning/pages --ignore-existing --region=de-nbg1-dc3 ; mc quota set provisioning/pages --size 20GiB; mc version enable provisioning/pages; mc mb provisioning/ci_secure_files --ignore-existing --region=de-nbg1-dc3 ; mc quota set provisioning/ci_secure_files --size 20GiB; mc version enable provisioning/ci_secure_files;
echo "End Minio provisioning";
Limits:
cpu: 750m
ephemeral-storage: 1Gi
memory: 1536Mi
Requests:
cpu: 500m
ephemeral-storage: 50Mi
memory: 1Gi
Environment:
MINIO_SCHEME: http
MINIO_ROOT_USER: <set to the key 'root-user' in secret 'minio-cluster'> Optional: false
MINIO_ROOT_PASSWORD: <set to the key 'root-password' in secret 'minio-cluster'> Optional: false
Mounts:
/.mc from empty-dir (rw,path="app-mc-dir")
/etc/ilm from minio-provisioning (rw)
/opt/bitnami/minio/tmp from empty-dir (rw,path="app-tmp-dir")
/tmp from empty-dir (rw,path="tmp-dir")
Volumes:
empty-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
minio-provisioning:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: minio-cluster-provisioning
Optional: false
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 17m job-controller Created pod: minio-cluster-provisioning-ldcv8
Normal SuccessfulDelete 9m15s job-controller Deleted pod: minio-cluster-provisioning-ldcv8
Warning BackoffLimitExceeded 9m15s job-controller Job has reached the specified backoff limit
[root::bastion-01-de-nbg1-dc3]
~/minio-cluster:
Pod logs:
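(The provisioning pod had already been deleted by the job controller, see the events above. On a fresh run, its logs could be pulled roughly like this; the pod name suffix is generated, so list the pods first:)
kubectl --namespace minio-cluster get pods \
  -l batch.kubernetes.io/job-name=minio-cluster-provisioning
kubectl --namespace minio-cluster logs minio-cluster-provisioning-ldcv8 -c minio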
Additional information
No response
Hi @pomland-94,
Using your values, I can see this error, related to a bucket name, in the provisioning pod:
$ kubectl logs minio-cluster-provisioning-7z4rv
Warning: Use tokens from the TokenRequest API or manually created secret-based tokens instead of auto-generated secret-based tokens.
Defaulted container "minio" out of: minio, wait-for-available-minio (init)
Start Minio provisioning
Added `provisioning` successfully.
Successfully applied new settings.
Please restart your server 'mc admin service restart provisioning'.
Restart command successfully sent to `provisioning`. Type Ctrl-C to quit or wait to follow the status of the restart process.
┌────────────────────────────────────────────────────┬────────┐
│ HOST │ STATUS │
├────────────────────────────────────────────────────┼────────┤
│ minio-1.minio-headless.test.svc.cluster.local:9000 │ ✔ │
│ minio-0.minio-headless.test.svc.cluster.local:9000 │ ✔ │
│ minio-2.minio-headless.test.svc.cluster.local:9000 │ ✔ │
│ minio-3.minio-headless.test.svc.cluster.local:9000 │ ✔ │
└────────────────────────────────────────────────────┴────────┘
Restarted `provisioning` successfully in 1 seconds
Bucket created successfully `provisioning/artifacts`.
Successfully set bucket quota of 20 GiB on `artifacts`
provisioning/artifacts versioning is enabled
mc: <ERROR> Unable to make bucket `provisioning/external_diffs`. Bucket name contains invalid characters
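The name `external_diffs` (and any other bucket name containing an underscore) violates the S3 bucket naming rules that MinIO enforces: only lowercase letters, digits, hyphens, and periods are allowed. Renaming those buckets in your values should unblock the provisioning job; a sketch of the change (the hyphenated names are only suggestions):
buckets:
  - name: external-diffs
    versioning: Versioned
    quota:
      type: set
      size: 20GiB
    region: de-nbg1-dc3
  # ...same pattern for dependency-proxy, terraform-state, and ci-secure-files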
I hope it helps.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.