pulumi-kubernetes-operator
Add Kustomize support and a StatefulSet due to lock problem
Hello!
- Vote on this issue by adding a 👍 reaction
- If you want to implement this feature, comment to let us know (we'll work with you on design, scheduling, etc.)
Issue details
- Could you add a kustomization for the k8s manifests and CRDs? That would allow other kustomizations to reference the manifests with a single line in their `resources` (see the sketch after this list).
- Could you convert the operator's Deployment to a StatefulSet, or make the lock owner name configurable? That would solve the problem of a lock file named after a pod that no longer exists, which leaves Stacks stuck after pod restarts in the Deployment's ReplicaSet.
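For illustration, the kind of usage being requested might look roughly like the following; the repository path and ref here are placeholders, not actual published bases:

```yaml
# kustomization.yaml in a downstream project (placeholder path and ref).
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  # A single line pulling in the operator's CRDs and deployment manifests,
  # assuming the project publishes a Kustomize base (this path is hypothetical).
  - https://github.com/pulumi/pulumi-kubernetes-operator/deploy?ref=master
```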
Affected area/feature
- deployment k8s manifests
Hi @zelig81 - thank you for this enhancement request! We'll take a look.
After using a StatefulSet for a while, it still does not help: after a restart, Pulumi inside the operator still remains locked.
Maybe as a solution:
- The Pulumi Operator's Deployment (or StatefulSet) could set a lock owner name derived from the labels of the operator pod (see the manifest sketch after this list)?
- Then, after a restart, the Pulumi Operator should check whether the lock was taken by this instance of the Pulumi Operator or by some other user.
- If the lock was taken by this instance of the Pulumi Operator, remove it and proceed as usual.
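To sketch the first bullet: the operator pod could expose a stable label through the Kubernetes downward API as an environment variable, which stays the same across controller-driven restarts (unlike the generated pod name). The label key, variable name, and image tag below are hypothetical, and the operator would still need code changes to actually use this value as the lock owner name:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pulumi-kubernetes-operator
spec:
  serviceName: pulumi-kubernetes-operator
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: pulumi-kubernetes-operator
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: pulumi-kubernetes-operator
    spec:
      containers:
        - name: operator
          image: pulumi/pulumi-kubernetes-operator:v1.10.0  # placeholder tag
          env:
            # Downward API: expose a stable pod label as the proposed lock
            # owner name; unlike the pod name, this survives pod restarts.
            - name: LOCK_OWNER_NAME  # hypothetical; not read by the operator today
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app.kubernetes.io/instance']
```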
Good news, everyone: we just released a preview of Pulumi Kubernetes Operator v2. This release has a whole new architecture that uses dedicated pods as the execution environment. The locking story is now significantly different, and I'm not aware of any remaining problems with the operator's leasing mechanism. We have also made a set of Kustomization bases available under operator/config (see the sketch below the blog link). Please let us know if any gaps remain.
Please read the announcement blog post for more information: https://www.pulumi.com/blog/pulumi-kubernetes-operator-2-0/
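As a rough sketch, a downstream kustomization might consume those bases like this; the exact subdirectory ("default") and the ref are assumptions, so check operator/config in the repository for the actual layout:

```yaml
# kustomization.yaml consuming the v2 operator bases (subdirectory and ref are assumptions).
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  # Remote Kustomize base pulled straight from the operator repository.
  - https://github.com/pulumi/pulumi-kubernetes-operator/operator/config/default?ref=v2.0.0
```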
Would love to hear your feedback! Feel free to engage with us on the #kubernetes channel of the Pulumi Slack workspace. cc @zelig81