MinIO Console shows 'x509: certificate is valid for ingress.local' at login
I set up MinIO on minikube, then created a new tenant named second in the minio-second namespace (no node affinity, console exposed, TLS cert autogenerated). Everything looks fine from the k8s perspective, but the tenant is shown as unhealthy, and it stays in the "Provisioning initial users" state for a long time. When I try to log in to the MinIO Console I get this error:
Post \"https://minio.minio-second.svc.cluster.local/\": x509: certificate is valid for ingress.local, not minio.minio-second.svc.cluster.local.
Using devtools I can see that /api/v1/login returns 401 with this message. I checked that the k8s CSR had been created and approved.
I can't tell which certificate the error is reporting about, because the generated one has different subjects.
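One way to see which certificate the service is actually presenting is to inspect its SANs directly. A diagnostic sketch, assuming openssl 1.1.1+ on the workstation and the service name minio from the resource listing below:

# Forward the tenant's MinIO service, then print the subject and SANs of
# the certificate it presents.
kubectl -n minio-second port-forward svc/minio 9443:443 &
echo | openssl s_client -connect localhost:9443 \
    -servername minio.minio-second.svc.cluster.local 2>/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName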
A few screenshots for better understanding:
Expected Behavior
Log in to the MinIO Console using the default credentials.
Current Behavior
Can't log in.
Context
I'm trying to get a working installation so I can evaluate its features and advantages compared with Ceph's S3 functionality.
Your Environment
- Version used (minio --version): v5.0.6
- Server setup and configuration: minikube version v1.31.1
- Operating System and version (uname -a): Linux 5.19.0-46-generic #47~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Jun 21 15:35:31 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
Resources of minio-second namespace:
~$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/second-pool-0-0 2/2 Running 0 70m
pod/second-pool-0-1 2/2 Running 0 70m
pod/second-pool-0-2 2/2 Running 0 70m
pod/second-pool-0-3 2/2 Running 0 70m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/minio LoadBalancer 10.105.193.202 10.105.193.202 443:30355/TCP 70m
service/second-console LoadBalancer 10.103.216.96 10.103.216.96 9443:31358/TCP 70m
service/second-hl ClusterIP None <none> 9000/TCP 70m
NAME READY AGE
statefulset.apps/second-pool-0 4/4 70m
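The operator's view of the tenant can also be queried directly; a quick sketch (the currentState field appears in the full spec dump further down):

kubectl -n minio-second get tenant second -o jsonpath='{.status.currentState}{"\n"}'
kubectl -n minio-second describe tenant second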
Please provide the steps to reproduce; it will help us reproduce the problem quickly. And please provide your tenant spec, @nixargh.
- Install & run minikube (you need docker.io as well)
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb && sudo dpkg -i minikube_latest_amd64.deb
minikube start
- Deploy the MinIO Operator step by step as described in the docs.
- Run minikube tunnel to allow LoadBalancer usage; a quick check that the tunnel works is sketched below.
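While minikube tunnel is running in another terminal, the LoadBalancer services should report an external IP instead of <pending> (service names as in the listing above):

kubectl -n minio-second get svc minio second-console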
Nothing interesting, really. But I have an update: yesterday I did the same on a "real" k8s cluster and it set up all right, so minikube probably adds some peculiarity to the issue.
Tenant spec (almost default as described above):
metadata:
  creationTimestamp: "2023-07-26T07:43:12Z"
  generation: 1
  name: second
  namespace: minio-second
  resourceVersion: "28398"
  uid: b8cf1d12-012a-4057-bde3-84c80b0c0924
scheduler:
  name: ""
spec:
  configuration:
    name: second-env-configuration
  credsSecret:
    name: second-secret
  exposeServices:
    console: true
    minio: true
  imagePullSecret: {}
  mountPath: /export
  pools:
  - name: pool-0
    resources:
      limits:
        cpu: "2"
        memory: 3Gi
      requests:
        cpu: "1"
        memory: 2Gi
    runtimeClassName: ""
    servers: 4
    volumeClaimTemplate:
      metadata:
        creationTimestamp: null
        name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: "13421772800"
        storageClassName: standard
      status: {}
    volumesPerServer: 2
  requestAutoCert: true
  users:
  - name: second-user-0
status:
  availableReplicas: 4
  certificates:
    autoCertEnabled: true
    customCertificates: {}
  currentState: Initialized
  pools:
  - legacySecurityContext: false
    ssName: second-pool-0
    state: PoolInitialized
  provisionedUsers: true
  revision: 0
  syncVersion: ""
  usage: {}
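For reference, a dump like the one above can be regenerated with (a sketch, using the names from this setup):

kubectl -n minio-second get tenant second -o yaml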
2. Deploy the MinIO Operator
About step 2, I want to know the specific steps. @nixargh We don't do random reproductions.
I gave you a link to your documentation that I followed literally, but if you insist:
- kubectl krew update
- kubectl krew install minio
- kubectl minio version
- kubectl minio init
- kubectl minio proxy -n minio-operator
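After kubectl minio init, a quick sanity check that the operator came up (assuming the default minio-operator namespace used above):

kubectl -n minio-operator get pods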
Thanks. @nixargh Because some issues are opened by users with their own ideas mixed in, and in the end it turns out not to be a MinIO problem.
@jiuker I see. As I said, the same script worked perfectly on a real k8s cluster, so the difference is somewhere inside minikube (I can't say I know its internals), or maybe it is related to the k8s version, since I installed the latest minikube, which probably ships the latest stable k8s (I haven't checked).
While reproducing the issue I see the errors below in the MinIO pods:
API: SYSTEM()
Time: 12:54:07 UTC 11/28/2023
Error: unable to create (/export3/.minio.sys/tmp) file access denied, drive may be faulty please investigate (*fmt.wrapError)
6: internal/logger/logger.go:258:logger.LogIf()
5: cmd/prepare-storage.go:96:cmd.bgFormatErasureCleanupTmp()
4: cmd/xl-storage.go:263:cmd.newXLStorage()
3: cmd/object-api-common.go:63:cmd.newStorageAPI()
2: cmd/format-erasure.go:673:cmd.initStorageDisksWithErrors.func1()
1: github.com/minio/pkg/[email protected]/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()
So it is basically a permissions problem: the minio user in the MinIO pods is not allowed to write to paths like /export3/.minio.sys/tmp. Since we are not using actual disks, directories like /export1 serve as the drives, and because MinIO cannot write to them the cluster never becomes ready.
MinIO runs as non-root, while the mount points (in this case plain directories, since it's the standard storage class with local PVs) are owned by root, as below:
drwxr-xr-x. 1 root root 0 Nov 28 12:50 export1
drwxr-xr-x. 1 root root 0 Nov 28 12:50 export0
drwxr-xr-x. 1 root root 0 Nov 28 12:50 export3
drwxr-xr-x. 1 root root 0 Nov 28 12:50 export2
MinIO is unable to write its metadata and so cannot start the cluster properly.
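This is easy to confirm from inside a tenant pod; a diagnostic sketch, with the pod, namespace, and container names taken from this reproduction (adjust to your tenant):

# The minio container runs as a non-root uid, while the export mounts are root-owned.
kubectl -n tenant-ns exec tenant-1-pool-0-0 -c minio -- id
kubectl -n tenant-ns exec tenant-1-pool-0-0 -c minio -- ls -ld /export0 /export1 /export2 /export3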
We either have to fix the permissions for these dirs manually using chown -R 1000:1000 /export{0..3}, or fix the PVCs by setting the right security context, as it is already set for the MinIO pods:
securityContext:
  fsGroup: 1000
  fsGroupChangePolicy: OnRootMismatch
  runAsGroup: 1000
  runAsNonRoot: true
  runAsUser: 1000
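In the Tenant resource this belongs at the pool level, so freshly provisioned volumes get the right group ownership. A sketch against the spec shown earlier; the pool-level securityContext field is assumed from the Tenant CRD:

spec:
  pools:
  - name: pool-0
    servers: 4
    volumesPerServer: 2
    securityContext:
      fsGroup: 1000
      fsGroupChangePolicy: OnRootMismatch
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000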
I opted to change the permissions manually: using minikube ssh I got a shell inside each minikube node and set the permissions on the dirs. After that I removed the MinIO statefulset (which the operator re-creates) and the tenant came online properly.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane 3h34m v1.28.3
minikube-m02 Ready <none> 3h33m v1.28.3
minikube-m03 Ready <none> 3h31m v1.28.3
minikube-m04 Ready <none> 3h30m v1.28.3
$ minikube ssh docker container ls -n minikube-m02 | grep tenant-1
adce41fffcba 1d25324726a2 "/minio-operator sid…" 2 hours ago Up 2 hours k8s_sidecar_tenant-1-pool-0-2_tenant-ns_cd93a469-2265-4b7d-9476-ae1ef9620ac6_0
afaf00b559cc 88c665b1183a "/usr/bin/docker-ent…" 2 hours ago Up 2 hours k8s_minio_tenant-1-pool-0-2_tenant-ns_cd93a469-2265-4b7d-9476-ae1ef9620ac6_0
c617bc3c53e9 registry.k8s.io/pause:3.9 "/pause" 2 hours ago Up 2 hours k8s_POD_tenant-1-pool-0-2_tenant-ns_cd93a469-2265-4b7d-9476-ae1ef9620ac6_0
read() failed: Connection reset by peer
$ minikube ssh "docker container exec -it -u 0 afaf00b559cc /bin/bash" -n minikube-m02
and then finally ran chown -R 1000:1000 /export{0..3}. This needs to be repeated on every minikube node, and restarting the MinIO pods then brings the cluster online; a consolidated sketch follows.
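A consolidated version of the manual fix, assuming the node, tenant, and namespace names from this reproduction and at most one MinIO pod per node:

# On every minikube node: find the MinIO container, chown the export dirs as root.
for node in minikube minikube-m02 minikube-m03 minikube-m04; do
  cid=$(minikube ssh "docker container ls" -n "$node" | grep k8s_minio_tenant-1 | awk '{print $1}')
  if [ -n "$cid" ]; then
    minikube ssh "docker container exec -u 0 $cid chown -R 1000:1000 /export0 /export1 /export2 /export3" -n "$node"
  fi
done
# Delete the statefulset; the operator re-creates it and the pods start with writable drives.
kubectl -n tenant-ns delete statefulset tenant-1-pool-0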