pulumi-kubernetes
Patch a resource to add an element to an existing list
What happened?
I may not be using this correctly, but I couldn't find anything related in the docs. When I patch an existing resource to add a new item to a list it already has, everything works fine on the first run; it's the subsequent runs that go wrong. Say a deployment exists with 2 volumes and I want to add a third: on the first run, the third volume is added. But if I then run "pulumi up" again without making any changes:
- it detects an update on the patch resource; looking at the details, it says the patch is going to replace the volume at index 0 with the added volume.
- if I execute the update, it just hangs until it times out with messages of this sort:
* the Kubernetes API server reported that "<namespace>/<deployment>" failed to fully initialize or become live: '<deployment>' timed out waiting to be Ready
* Attempted to roll forward to new ReplicaSet, but minimum number of Pods did not become live
What I think is going on: Pulumi applies the patch and then waits for the deployment to roll out replacement pods, but since the patch didn't actually change anything, the replica count never goes down from its initial value (and back up), so the await never finishes.
That behaviour is wrong, but there shouldn't have been an update to begin with. Am I doing something wrong?
Example
Let's take an existing resource like coredns, for example. If I apply this patch a second time, I get the behaviour described above (I'm using .NET):
var coreDnsDeploymentPatch = new DeploymentPatch("coredns", new DeploymentPatchArgs
{
    Metadata = new ObjectMetaPatchArgs { Namespace = "kube-system", Name = "coredns" },
    Spec = new DeploymentSpecPatchArgs
    {
        Template = new PodTemplateSpecPatchArgs
        {
            Spec = new PodSpecPatchArgs
            {
                Volumes =
                {
                    new VolumePatchArgs
                    {
                        Name = "my-new-volume",
                        ConfigMap = new ConfigMapVolumeSourcePatchArgs
                        {
                            DefaultMode = 420,
                            Name = "some-config-map",
                            Items =
                            {
                                new KeyToPathPatchArgs
                                {
                                    Key = "key-1",
                                    Path = "path-1"
                                }
                            }
                        }
                    }
                },
                Containers =
                {
                    new ContainerPatchArgs
                    {
                        Name = "coredns",
                        VolumeMounts =
                        {
                            new VolumeMountPatchArgs
                            {
                                Name = "my-new-volume", // must match the volume name declared above
                                ReadOnly = true,
                                MountPath = "/my-path"
                            }
                        }
                    }
                }
            }
        }
    }
});
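A possible stopgap while the diff issue is investigated (a sketch only, not verified against this repro) is to skip the readiness await so that a no-op patch no longer hangs until timeout, and to ignore changes to the volume list so the spurious replacement is not reported on later runs. Both have side effects: pulumi.com/skipAwait also skips waiting on real rollouts, and IgnoreChanges also hides legitimate edits to that path. The sketch assumes the same usings and patch args as the example above; the "coredns-stopgap" name and the IgnoreChanges path are illustrative:

var coreDnsDeploymentPatchStopgap = new DeploymentPatch("coredns-stopgap", new DeploymentPatchArgs
{
    Metadata = new ObjectMetaPatchArgs
    {
        Namespace = "kube-system",
        Name = "coredns",
        // pulumi.com/skipAwait tells the provider not to wait for the Deployment to
        // become Ready after the patch is applied, which avoids the timeout above.
        Annotations = { { "pulumi.com/skipAwait", "true" } },
    },
    // Spec omitted here; it would be the same volume/volumeMount patch as above.
}, new CustomResourceOptions
{
    // Suppresses the spurious "replace volume at index 0" diff on later runs,
    // at the cost of also ignoring genuine changes to the volume list.
    IgnoreChanges = { "spec.template.spec.volumes" },
});

Note that because this is a Patch resource, the annotation itself is written onto the live coredns Deployment via server-side apply.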
Output of pulumi about
CLI
Version      3.93.0
Go Version   go1.21.3
Go Compiler  gc

Plugins
NAME        VERSION
command     0.9.2
dotnet      unknown
kubernetes  4.5.4
random      4.8.0

Host
OS       ubuntu
Version  22.04
Arch     x86_64

This project is written in dotnet: executable='/usr/bin/dotnet' version='6.0.125'

Current Stack: organization/my-stack/mystack-dev

TYPE                                URN
pulumi:pulumi:Stack                 urn:pulumi:mystack-dev::mystack::pulumi:pulumi:Stack::mystack-mystack-dev
kubernetes:apps/v1:DeploymentPatch  urn:pulumi:mystack-dev::mystack::kubernetes:apps/v1:DeploymentPatch::coredns

Found no pending operations associated with mystack-dev

Backend
Name           [my-computer]
URL            file://~
User           [my-account]
Organizations
Token type     personal

Dependencies:
NAME                          VERSION
Mekit.Pulumi.Kubernetes.Crds  1.1.0
Pulumi                        3.59.0
Pulumi.Command                0.9.2
Pulumi.Kubernetes             4.5.4
Pulumi.Random                 4.8.0

Pulumi locates its logs in /tmp by default
Additional context
No response
Contributing
Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
Thanks for reporting this @Sshnyari. I believe I was able to reproduce this with just the volume patch as you described: https://github.com/mjeffryes/dotnetrepros/tree/pulumi-kubernetes-2682. We'll add this to our backlog.
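For reference, the volume-only reduction of the patch above, which appears to be enough to trigger the spurious diff, looks roughly like this (resource name is illustrative; same kube-system/coredns target and usings as the reporter's example):

var volumeOnlyPatch = new DeploymentPatch("coredns-volume-only", new DeploymentPatchArgs
{
    Metadata = new ObjectMetaPatchArgs { Namespace = "kube-system", Name = "coredns" },
    Spec = new DeploymentSpecPatchArgs
    {
        Template = new PodTemplateSpecPatchArgs
        {
            Spec = new PodSpecPatchArgs
            {
                // Adding a single volume to the existing list is sufficient; no
                // container/volumeMount change is needed to reproduce.
                Volumes =
                {
                    new VolumePatchArgs
                    {
                        Name = "my-new-volume",
                        ConfigMap = new ConfigMapVolumeSourcePatchArgs { Name = "some-config-map" }
                    }
                }
            }
        }
    }
});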