TiDB Operator cannot start the cluster when the config field is not provided
Bug Report
What version of Kubernetes are you using?
Client Version: v1.31.1
Kustomize Version: v5.4.2
Server Version: v1.29.1
What version of TiDB Operator are you using?
v1.6.0
What's the status of the TiDB cluster pods?
All the Pods are stuck in the ContainerCreating state because the referenced ConfigMap does not exist.
What did you do?
We created a TiDB cluster without specifying the spec.tidb.config property.
Reproduction Steps
- Deploy a TiDB cluster without the spec.tidb.config property in the CR, e.g.:
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: test-cluster
spec:
  configUpdateStrategy: RollingUpdate
  enableDynamicConfiguration: true
  helper:
    image: alpine:3.16.0
  pd:
    baseImage: pingcap/pd
    config: "[dashboard]\n internal-proxy = true\n"
    maxFailoverCount: 0
    mountClusterClientSecret: true
    replicas: 3
    requests:
      storage: 10Gi
  pvReclaimPolicy: Retain
  tidb:
    baseImage: pingcap/tidb
    maxFailoverCount: 0
    replicas: 3
    service:
      externalTrafficPolicy: Local
      type: NodePort
  tikv:
    baseImage: pingcap/tikv
    config: |
      [raftdb]
      max-open-files = 256
      [rocksdb]
      max-open-files = 128
    maxFailoverCount: 0
    mountClusterClientSecret: true
    replicas: 3
    requests:
      storage: 100Gi
  timezone: UTC
  version: v8.1.0
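- Observe the component Pods and the generated ConfigMaps (for example with kubectl get pods and kubectl get configmaps in the cluster's namespace): the tidb Pods stay in ContainerCreating because no ConfigMap for the tidb component is created.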
What did you expect to see?
We expected the cluster to start running and reach the Healthy state.
What did you see instead?
We saw that all Pods were stuck in the ContainerCreating state because the referenced ConfigMap did not exist. The same issue happened when we tried to create TiKV without spec.tikv.config, which suggests that all TiDB components are affected by this bug.
Thanks for your report. Currently, you need to set an empty config for these components so that the operator creates an empty ConfigMap.
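For example, a minimal sketch of this workaround applied to the tidb section of the CR above; only the config line is new, and config: {} is assumed here as the empty-config form shown in the operator's sample manifests:

tidb:
  baseImage: pingcap/tidb
  # explicitly set an empty config so the operator renders a ConfigMap for this component
  config: {}
  maxFailoverCount: 0
  replicas: 3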
I see. Should the default value for config be changed to an empty string?
Yes, an empty string may be better.