Operator Not Creating PV In Correct Zone
- What version of the operator are you running? v0.13.0
- What version of Kubernetes are you running? Running on GKE v1.21.6-gke.1503

  ```
  Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:41:42Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"darwin/amd64"}
  Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.6-gke.1503", GitCommit:"2c7bbda09a9b7ca78db230e099cf90fe901d3df8", GitTreeState:"clean", BuildDate:"2022-02-18T03:17:45Z", GoVersion:"go1.16.9b7", Compiler:"gc", Platform:"linux/amd64"}
  ```
- What are you trying to do? Get the m3db cluster up and running with persistent volumes.
- What did you expect to happen? All 3 nodes should come up properly, each using a correctly placed PV.
- What happened? Only 2 of the 3 nodes are coming up properly. One PV is being created in us-central1-a and two PVs are being created in us-central1-b, when one PV should be created in each zone. It seems like the operator is ignoring this part of the config (snippet below, with the PV placement I expected sketched after it):
```yaml
- name: zone-c
  numInstances: 1
  storageClassName: ssd-retain
  nodeAffinityTerms:
  - key: topology.kubernetes.io/zone
    values:
    - us-central1-c
```
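For reference, here is roughly what I'd expect the zone-c PV to look like once provisioned. This is illustrative only, not copied from the cluster; the PV name, disk name, and capacity are placeholders, but the GCE PD provisioner normally records the disk's zone as node affinity on the volume:

```yaml
# Illustrative only: a PV as I'd expect the GCE PD provisioner to create it
# for the zone-c instance. Name, disk, and capacity are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-zone-c-example        # placeholder name
spec:
  capacity:
    storage: 350Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ssd-retain
  gcePersistentDisk:
    pdName: example-disk          # placeholder disk name
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - us-central1-c
```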
This is the config used to deploy the cluster (the M3DBCluster resource plus the `ssd-retain` StorageClass):
```yaml
apiVersion: operator.m3db.io/v1alpha1
kind: M3DBCluster
metadata:
  name: m3db-cluster-pv
spec:
  image: quay.io/m3db/m3dbnode:latest
  replicationFactor: 3
  numberOfShards: 1024
  # configMapName: m3db-cluster-pv
  isolationGroups:
  - name: zone-a
    numInstances: 1
    storageClassName: ssd-retain
    nodeAffinityTerms:
    - key: topology.kubernetes.io/zone
      values:
      - us-central1-a
  - name: zone-b
    numInstances: 1
    storageClassName: ssd-retain
    nodeAffinityTerms:
    - key: topology.kubernetes.io/zone
      values:
      - us-central1-b
  - name: zone-c
    numInstances: 1
    storageClassName: ssd-retain
    nodeAffinityTerms:
    - key: topology.kubernetes.io/zone
      values:
      - us-central1-c
  etcdEndpoints:
  - http://etcd-0.etcd:2379
  - http://etcd-1.etcd:2379
  - http://etcd-2.etcd:2379
  namespaces:
  - name: metrics-10s:2d
    preset: 10s:2d
  podIdentityConfig:
    # Using no sources will default to just PodName, which is what we want as
    # remote PVs can move around with the pod
    sources: []
  dataDirVolumeClaimTemplate:
    metadata:
      name: m3db-data
    spec:
      accessModes:
      - ReadWriteOnce
      # this field will be overwritten per-statefulset
      storageClassName: unused
      resources:
        requests:
          storage: 350Gi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-retain
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
parameters:
  type: pd-ssd
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - us-central1-a
    - us-central1-b
    - us-central1-c
```
If I instead create a separate StorageClass per zone, as below, the PVs are created in the correct zones. Not sure why the single StorageClass above doesn't work, though:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-retain-us-central1-a
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
parameters:
  type: pd-ssd
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - us-central1-a
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-retain-us-central1-b
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
parameters:
  type: pd-ssd
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - us-central1-b
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-retain-us-central1-c
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
parameters:
  type: pd-ssd
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - us-central1-c
```
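One thing I haven't tried yet, so this is only a guess rather than a verified fix: the `ssd-retain` class above uses the default `Immediate` volume binding mode, which lets the GCE PD provisioner pick a zone before the pod is scheduled. Setting `volumeBindingMode: WaitForFirstConsumer` delays provisioning until the pod is placed, so the PV should end up in the pod's zone. A sketch of that variant:

```yaml
# Sketch only (not verified against this cluster): same StorageClass with
# delayed binding, so the PV is provisioned in whatever zone the consuming
# pod is scheduled into.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-retain
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: pd-ssd
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - us-central1-a
    - us-central1-b
    - us-central1-c
```

If that behaves as described, the per-zone StorageClasses above shouldn't be necessary.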