Use label selectors to filter Kubernetes pods
It seems that the HSDS pod filter logic for a Kubernetes deployment is currently hardcoded to look up a single pod label with a fixed name (`app`) whose value must match the `k8s_app_label` entry in the HSDS `config.yml` file. This is not very flexible, as some K8s environments and deployment scenarios may already use the `app` label for other purposes, which may conflict with the way HSDS expects to use it.
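For reference, the current behavior is driven by a single entry along these lines (the value shown is illustrative):

```yaml
# config.yml (excerpt) -- value is illustrative
# HSDS selects pods whose "app" label equals this value:
k8s_app_label: hsds
```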
I am currently facing this issue while working in an on-premises Kubernetes environment where, due to some infrastructure rules and constraints, we need to run two HSDS instances: one for a development environment and another for a production environment. The way the K8s, CI/CD, and DevOps infrastructure is set up, both HSDS instances run in the same K8s namespace and all of their pods share the same `app` label. The distinction between pods from different environments is made with another label, `env`, added to each pod.
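To make the setup concrete, the pods in both instances carry labels along these lines (a sketch of our deployment's labeling scheme):

```yaml
# Labels on a development data-node pod (sketch)
metadata:
  labels:
    app: hsds       # shared by both HSDS instances
    env: dev        # "prod" on the production instance's pods
    nodeType: dn    # "sn" on the service-node pods
```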
If we were to check the pods manually using `kubectl` or the K8s API, we could fetch each HSDS instance's pods using label selectors. E.g.:
```
# All HSDS pods:
$ kubectl get pods -l app=hsds
NAME                 READY   STATUS    RESTARTS   AGE
hsds-dn-dev-gfq8d    1/1     Running   0          90s
hsds-dn-dev-kwhjw    1/1     Running   0          90s
hsds-dn-prod-lmvlg   1/1     Running   0          79s
hsds-dn-prod-mdbzd   1/1     Running   0          80s
hsds-dn-prod-tdxbm   1/1     Running   0          79s
hsds-sn-dev          1/1     Running   0          90s
hsds-sn-prod         1/1     Running   0          79s

# HSDS pods from the development instance:
$ kubectl get pods -l app=hsds,env=dev
NAME                READY   STATUS    RESTARTS   AGE
hsds-dn-dev-gfq8d   1/1     Running   0          94s
hsds-dn-dev-kwhjw   1/1     Running   0          94s
hsds-sn-dev         1/1     Running   0          94s

# HSDS pods from the development instance (only the data nodes):
$ kubectl get pods -l app=hsds,env=dev,nodeType=dn
NAME                READY   STATUS    RESTARTS   AGE
hsds-dn-dev-gfq8d   1/1     Running   0          3m47s
hsds-dn-dev-kwhjw   1/1     Running   0          3m47s
```
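The same selector expressions work programmatically. Here is a minimal sketch with the official Python Kubernetes client (the namespace name is assumed); this is the kind of call a pod-discovery routine could make:

```python
from kubernetes import client, config

# Load credentials; use config.load_incluster_config() when running inside the cluster.
config.load_kube_config()

v1 = client.CoreV1Api()

# Fetch only the development data-node pods via a label selector expression.
pods = v1.list_namespaced_pod(
    namespace="hsds",  # assumed namespace
    label_selector="app=hsds,env=dev,nodeType=dn",
)
for pod in pods.items:
    print(pod.metadata.name, pod.status.pod_ip)
```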
It would be nice if HSDS could support setting a K8s label selector expression in its `config.yml`, making it more flexible for each K8s deployment to set its pod labels according to its own needs and constraints.
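For example, a hypothetical `config.yml` entry (the key name `k8s_label_selector` is only a suggestion, not an existing HSDS option) could accept a full selector expression:

```yaml
# Hypothetical config.yml entry -- key name is a suggestion
k8s_label_selector: app=hsds,env=dev,nodeType=dn
```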
Agree that would be nice - and I see you've already submitted a PR for this, thanks! I'll take a look at the PR tomorrow.
PR is merged, so closing this.