ERROR 2002 Can't connect to local MySQL server through socket '/opt/bitnami/mysql/tmp/mysql.sock' (2)
Name and Version
bitnami/charts/[email protected]
What steps will reproduce the bug?
I can build and use the Bitnami MySQL container locally, but when I deploy to Kubernetes (Rancher) I can no longer start mysqld.
Install chart:
helm upgrade \
--install \
--namespace=$HELM_RELEASE_NAME_ENV_VAR \
--kube-apiserver=$KUBE_API_ENV_VAR \
--kube-token=$KUBE_TOKEN_ENV_VAR \
$NAME_ENV_VAR \
chart/
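After the upgrade, a quick check that the release and its pods actually came up (a sketch reusing the same environment variables as above):
helm status $NAME_ENV_VAR --namespace=$HELM_RELEASE_NAME_ENV_VAR --kube-apiserver=$KUBE_API_ENV_VAR --kube-token=$KUBE_TOKEN_ENV_VAR
kubectl get pods --namespace=$HELM_RELEASE_NAME_ENV_VAR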
Once deployed, the MySQL daemon will not start:
CMD: mysql -v
RESP: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/opt/bitnami/mysql/tmp/mysql.sock' (2)
CMD: mysqld
RESP: [Server] Failed to set datadir to '/bitnami/mysql/data/' (OS errno: 2 - No such file or directory)
NOTE: When using the Bitnami container locally there are no issues; everything works as intended.
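The second error suggests the data directory itself is missing inside the pod. A quick way to narrow that down from a shell in the container (a diagnostic sketch; the paths are the chart defaults):
ls -ld /bitnami/mysql /bitnami/mysql/data   # does the datadir exist, and who owns it?
df -h /bitnami/mysql                        # is the persistent volume actually mounted here?
id                                          # confirm the container runs as UID 1001, as configured below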
Are you using any custom parameters or values?
No.
What is the expected behavior?
I expect the MySQL Daemon to start and begin accepting connections.
What do you see instead?
[ERROR] [MY-NNNNNN] [Server] Aborting
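For more context around that line, the complete startup log can be pulled from the pod (a sketch; the pod name and namespace are placeholders):
kubectl logs <mysql-pod-name> --namespace <namespace>
kubectl logs <mysql-pod-name> --namespace <namespace> --previous   # last crashed attempt, if the pod is restarting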
Additional information
values:
##
##--------------------------------------------
## @section MySQL parameters
## Configures MySQL Deployment
## ref: https://artifacthub.io/packages/helm/bitnami/mysql/8.7.3
##--------------------------------------------
##
mysql:
  ## Bitnami MySQL image
  ## ref: https://github.com/bitnami/bitnami-docker-mysql
  ## @param mysql.image.registry MySQL image registry
  ## @param mysql.image.repository MySQL image repository
  ## @param mysql.image.tag MySQL image tag (immutable tags are recommended)
  ## @param mysql.image.pullPolicy MySQL image pull policy
  ## @param mysql.image.pullSecrets Specify docker-registry secret names as an array
  ## @extra mysql.image.debug Specify if debug logs should be enabled
  image:
    registry: docker.io
    repository: bitnami/mysql
    tag: 8.0.26-debian-10-r0
    debug: true
    pullPolicy: IfNotPresent
    pullSecrets: []
  ## @param mysql.architecture MySQL architecture (`standalone` or `replication`)
  # architecture: replication
  diagnosticMode:
    enabled: true
  auth:
    ## @extra mysql.auth.rootPassword [string] Password for the `root` user. Ignored if existing secret is provided. Generated if blank.
    ## ref: https://github.com/bitnami/bitnami-docker-mysql#setting-the-root-password-on-first-run
    rootPassword: passwordDefinedHere
    ## @param mysql.auth.forcePassword Force users to specify required passwords
    forcePassword: true
    ## @param mysql.auth.database Name for a custom database to create
    ## ref: https://github.com/bitnami/bitnami-docker-mysql/blob/master/README.md#creating-a-database-on-first-run
    database: databaseName
    ## @param mysql.auth.username Name for a custom user to create
    ## ref: https://github.com/bitnami/bitnami-docker-mysql/blob/master/README.md#creating-a-database-user-on-first-run
    username: userName
    ## @extra mysql.auth.password [string] Password for the new user. Ignored if existing secret is provided
    password: userPassword
    ## @param mysql.auth.replicationUser MySQL replication user
    ## ref: https://github.com/bitnami/bitnami-docker-mysql#setting-up-a-replication-cluster
    # replicationUser: ""
    ## @extra mysql.auth.replicationPassword [string] MySQL replication user password. Ignored if existing secret is provided
    # replicationPassword: ""
    ## @extra mysql.auth.existingSecret Use existing secret for password details. The secret has to contain the keys `mysql-root-password`, `mysql-replication-password` and `mysql-password`
    ## NOTE: When existingSecret is set, the auth.rootPassword, auth.password, auth.replicationPassword provided values are ignored.
    # existingSecret: ""
  ## Upon starting, the container will always execute files with extensions .sh, .sql and .sql.gz located at /docker-entrypoint-initdb.d
  ## See bitnami/mysql chart documentation for syntax
  ## @extra mysql.primary.extraVolumeMounts May be useful for recovery/initialization from an existing logical backup (.sql)
  ## extraVolumeMounts: []
  ## @extra mysql.primary.extraVolumes May be useful for recovery/initialization from an existing logical backup (.sql)
  ## extraVolumes: []
  ##
  ## @param mysql.primary.resources Resources applied to primary node
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 2
      memory: 4Gi
  ## @param mysql.primary.startupProbe.enabled Will break deployment if kubernetes version doesn't support this feature yet.
  startupProbe:
    enabled: false
  ## Configure extra options for liveness probe
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
  ## @param secondary.livenessProbe.enabled Enable livenessProbe
  livenessProbe:
    enabled: false
  ## Configures primary mysql node
  ## ref: https://github.com/bitnami/charts/tree/master/bitnami/mysql/templates/primary
  primary:
    ## MySQL primary Pod security context
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
    ## @param primary.podSecurityContext.enabled Enable security context for MySQL primary pods
    ## @param primary.podSecurityContext.fsGroup Group ID for the mounted volumes' filesystem
    ##
    podSecurityContext:
      enabled: true
      fsGroup: 1001
    ## MySQL primary container security context
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
    ## @param primary.containerSecurityContext.enabled MySQL primary container securityContext
    ## @param primary.containerSecurityContext.runAsUser User ID for the MySQL primary container
    ## @param primary.containerSecurityContext.runAsNonRoot Set MySQL primary container's Security Context runAsNonRoot
    ##
    containerSecurityContext:
      enabled: true
      runAsUser: 1001
      runAsNonRoot: true
    # Persistence for the primary node
    persistence:
      ## @param mysql.primary.persistence.enabled Enable persistence on MySQL primary replicas using a `PersistentVolumeClaim`. If false, use emptyDir
      enabled: true
      ## @param mysql.primary.persistence.size MySQL primary persistent volume size
      size: 1Gi
      ## @param primary.persistence.accessModes MySQL primary persistent volume access Modes
      ## Docs: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
      accessModes:
        - ReadWriteOnce
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        cpu: 2
        memory: 4Gi
    ## MySQL Primary Service parameters
    ##
    service:
      ## @param primary.service.type MySQL Primary K8s service type
      ##
      type: ClusterIP
      ## @param primary.service.ports.mysql MySQL Primary K8s service port
      ##
      ports:
        mysql: 3306
  secondary:
    # Persistence for the secondary node
    persistence:
      ## @param mysql.secondary.persistence.enabled Enable persistence on MySQL secondary replicas using a `PersistentVolumeClaim`. If false, use emptyDir
      enabled: true
      ## @param mysql.secondary.persistence.size MySQL secondary persistent volume size
      size: 1Gi
      ## @param secondary.persistence.accessModes MySQL secondary persistent volume access Modes
      ## Docs: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
      accessModes:
        - ReadWriteOnce
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        cpu: 1
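Since the datadir error points at the persistent volume path, it may also be worth confirming that the PVCs requested by primary.persistence and secondary.persistence are bound in the target cluster (a sketch; the claim name and namespace are placeholders):
kubectl get pvc --namespace <namespace>
kubectl describe pvc <data-claim-name> --namespace <namespace>
kubectl get events --namespace <namespace> --sort-by=.lastTimestamp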
Hi @TheMagicNacho
In your configuration, diagnosticMode is enabled. When running in that mode, all probes are disabled and the container command is overridden with:
command:
- sleep
args:
- infinity
So mysqld does not run by default in that mode. diagnosticMode is useful for debugging, but you need to start mysqld manually:
Get the list of pods by executing:
kubectl get pods --namespace default -l app.kubernetes.io/instance=mysqld
Access the pod you want to debug by executing:
kubectl exec --namespace default -ti <NAME OF THE POD> -- bash
In order to replicate the container startup scripts, execute this command:
/opt/bitnami/scripts/mysql/entrypoint.sh /opt/bitnami/scripts/mysql/run.sh
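Once debugging is done, diagnosticMode can be switched off again so the chart starts mysqld on its own, for example (a sketch; the mysql. prefix assumes the chart is consumed as a subchart, as in the values above):
helm upgrade \
--install \
--namespace=$HELM_RELEASE_NAME_ENV_VAR \
--set mysql.diagnosticMode.enabled=false \
$NAME_ENV_VAR \
chart/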
Thank you for the help. The issue turned out to be environment-specific rather than a problem with Bitnami; we ended up changing the platform design.