pulumi-command
Implicitly delete commands on instance removal
Hello!
- Vote on this issue by adding a 👍 reaction
- If you want to implement this feature, comment to let us know (we'll work with you on design, scheduling, etc.)
Issue details
I would like to be able to define a relationship between a remote.Command and another resource (e.g. an Instance), so that the Command is implicitly deleted when the instance disappears. My problem is this sequence of steps:
- Create an `aws.ec2.Instance` with Pulumi
- Create a `Command`, with `create` and `delete` options, on this instance
- Delete the instance outside of Pulumi
- Run `pulumi refresh` (detects the instance as removed, but not the command)
- Run `pulumi up`
In the last step, pulumi correctly tries to create the instance, but tries to replace the command. This involves running the delete step on the old instance, which no longer exists, so it fails.
I would like to be able to tell Pulumi that if the instance disappears during `pulumi refresh`, any command run on that instance should be considered deleted. This shouldn't be the default, since a command might create arbitrary resources and it might be correct to fail if it can't be cleaned up from the original instance. However, right now I have to delete a lot of commands from my stack manually whenever an instance is removed outside of Pulumi.
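For reference, the manual cleanup mentioned above can be done with `pulumi state delete`, which removes a resource from the stack's state without invoking its delete step (the URN below is illustrative; use the real one from your stack):

```shell
# List resources in the stack, including their URNs
pulumi stack --show-urns

# Remove the stale remote.Command from state without running its delete
# command against the no-longer-existing instance (illustrative URN)
pulumi state delete 'urn:pulumi:dev::my-stack::command:remote:Command::kubeadm'
```

This still has to be repeated for every affected command, which is what this issue asks to automate.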
For some additional discussion, see this Slack conversation.
Affected area/feature
@kwohlfahrt Can you provide a minimal-ish example of the command resource? I read the Slack convo, and I'm wondering if the solution might be to rewrite the delete on the command to something like:
if (instanceStillExists) {
// do something
}
Sure, it was something like this - this is pieced together from a few different files, so please excuse any inconsistencies:
import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";
import * as tls from "@pulumi/tls";
import { remote } from "@pulumi/command";
import * as crypto from "node:crypto";
import * as fs from "node:fs/promises";
import * as url from "node:url";

// Work around https://github.com/pulumi/pulumi-command/issues/96
const key = new tls.PrivateKey("ssh", { algorithm: "ED25519" });
const publicKey = new aws.ec2.KeyPair("k8s", { publicKey: key.publicKeyOpenssh });
const instance = new aws.ec2.Instance("control-plane", {
keyName: publicKey.id,
...
});
const connection = {
host: instance.publicDns,
user: "ubuntu",
privateKey: key.privateKeyOpenssh,
};
// basically `kubeadm init`, plus some fiddling
const controlPlaneScriptPath = new URL("control-plane-init.sh", import.meta.url);
const controlPlaneScript = new remote.CopyFile(
"control-plane-init.sh",
{
connection,
localPath: url.fileURLToPath(controlPlaneScriptPath),
remotePath: "/home/ubuntu/control-plane-init.sh",
// work around https://github.com/pulumi/pulumi-command/issues/33
triggers: [
crypto
.createHash("sha256")
.update(await fs.readFile(controlPlaneScriptPath))
.digest("hex"),
],
},
{ deleteBeforeReplace: true }
);
const initCmd = pulumi.interpolate`sudo ${controlPlaneScript.remotePath}`;
const resetCmd = 'sudo sh -c "kubeadm reset --force && rm -rf /etc/cni/net.d"';
const clusterInit = new remote.Command(
"kubeadm",
{
connection,
create: initCmd,
delete: resetCmd,
update: pulumi.interpolate`${resetCmd} && ${initCmd}`,
triggers: [controlPlaneScript],
},
{
deleteBeforeReplace: true,
}
);
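As a blunt interim workaround (my assumption, not a fix for this issue): recent Pulumi versions support the `retainOnDelete` resource option, which drops the resource from state on deletion without running its `delete` command at all. Applied to the `clusterInit` command above (reusing `connection`, `initCmd`, `resetCmd`, and `controlPlaneScript` from that snippet), it avoids the failing SSH to a vanished instance, but also skips `resetCmd` when the instance is still alive, so it trades away cleanup rather than making it conditional:

```typescript
import { remote } from "@pulumi/command";

// Variant of the command above: retainOnDelete tells Pulumi to forget the
// resource on delete instead of running `delete`/resetCmd over SSH. This
// never fails against a missing instance, but also never cleans up a live one.
const clusterInit = new remote.Command(
  "kubeadm",
  {
    connection,
    create: initCmd,
    delete: resetCmd,
    update: pulumi.interpolate`${resetCmd} && ${initCmd}`,
    triggers: [controlPlaneScript],
  },
  {
    deleteBeforeReplace: true,
    retainOnDelete: true,
  }
);
```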
What would the instanceStillExists test look like?
Hi @kwohlfahrt
Thanks for providing that example - it will help us understand the use case and look at adding this. We will try to get to this in a future sprint.
Paul