`helm_resource` with `docker_build_with_restart`
Mikael in the tilt channel reports that when they have a Tiltfile like this, the container doesn't restart properly:
load('ext://restart_process', 'docker_build_with_restart')

build_conf = 'Debug'  # assumed; defined elsewhere in the reporter's Tiltfile
compile_cmd = 'dotnet publish -c {} -o ./bin/out ./Application.csproj'.format(build_conf)

local_resource(
    "application-build",
    compile_cmd,
    deps=["."],
    ignore=['obj', 'bin', '**/out', '**/.tiltignore'],
)
docker_build_with_restart(
    ref='application',
    context='.',
    entrypoint=['dotnet', 'Application.dll'],
    dockerfile='Dockerfile',
    only=[
        '/bin/out',
    ],
    live_update=[
        sync('bin/out', '/app'),
    ],
)
helm_resource(
    name="application",
    deps=['application'],
    chart="helm",
    image_deps=['application'],
    image_keys=['service.image'],
    release_name="application",
    namespace="default",
    flags=['--values', '../values.yaml'],
    resource_deps=['application-build'],
)
I think this might have something to do with the interop between `helm_resource` and `docker_build_with_restart`, but am not sure yet.
I've reproduced this in servantes, with just `k8s_custom_deploy` and `docker_build_with_restart`.
Nick's hunch seems pretty accurate:
- `docker_build_with_restart` works by setting `ImageMap.Spec.OverrideCommand`.
- Info only flows from `ImageMap` to `k8s_custom_deploy` via the `$TILT_IMAGE` env vars, but those do not contain the `OverrideCommand`, so there's no way for `k8s_custom_deploy` to know.
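To make that data flow concrete, here's a minimal hypothetical sketch (the resource name, `deploy/fortune.yaml`, and the `fortune` image are placeholders, not the actual servantes setup): the apply command only ever receives the resolved image reference through `$TILT_IMAGE_0`; nothing in its environment carries the `OverrideCommand`.

# Hypothetical sketch only; names and paths are placeholders.
docker_build('fortune', 'fortune')

k8s_custom_deploy(
    'fortune-image-env-demo',
    # $TILT_IMAGE_0 is the resolved ref for image_deps[0]; there is no analogous
    # variable carrying ImageMap.Spec.OverrideCommand, so the restart wrapper set
    # by docker_build_with_restart never reaches the applied YAML.
    apply_cmd='echo "image ref: $TILT_IMAGE_0" >&2 && sed -e "s|image: fortune|image: $TILT_IMAGE_0|" deploy/fortune.yaml | kubectl apply -f - -oyaml',
    delete_cmd='kubectl delete -f deploy/fortune.yaml',
    image_deps=['fortune'],
    deps=['deploy/fortune.yaml'],
)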
I got live update to work in this situation by overriding the image's entrypoint with `docker_build_sub` like this:
load('ext://restart_process', 'docker_build_with_restart')
load('ext://docker_build_sub', 'docker_build_sub')

k8s_custom_deploy(
    'fortune-depl',
    apply_cmd='m4 -Dvarowner="$(whoami)" deploy/fortune.yaml | sed -e"s|image: fortune|image: ${TILT_IMAGE_0}|" | kubectl apply -f - -oyaml',
    delete_cmd='m4 -Dvarowner="$(whoami)" deploy/fortune.yaml | kubectl delete -f -',
    image_deps=['fortune'],
    deps='deploy/fortune.yaml',
)

docker_build_sub(
    'fortune-with-entrypoint',
    'fortune',
    extra_cmds=['ENTRYPOINT /tilt-restart-wrapper --watch_file=/tmp/.restart-proc --entr_flags=-r /go/bin/fortune'],
)

docker_build_with_restart(
    'fortune',
    'fortune',
    '/go/bin/fortune',
    dockerfile_contents='FROM fortune-with-entrypoint',
    live_update=[
        sync('fortune', '/go/src/github.com/tilt-dev/servantes/fortune'),
        run('cd src/github.com/tilt-dev/servantes/fortune && make proto'),
        run('cd src/github.com/tilt-dev/servantes/fortune && go install .'),
    ],
)
This has two pretty big downsides:
- The entrypoint requires internal knowledge of `docker_build_with_restart`, and won't behave according to its arguments (e.g., `exit_policy`, `restart_file`) without manual modification.
- If the `ContainerSpec` has a `command`, then the image's entrypoint is ignored and things still magically fail to work (see the sketch after this list).
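For downside (2), the failure comes from Kubernetes itself: a container's `command` takes precedence over the image's ENTRYPOINT, so a wrapper baked in via `docker_build_sub` is silently ignored. A hedged sketch with placeholder names; the apply mechanism doesn't matter, and the same applies to YAML rendered by a chart and applied via a custom command:

# Placeholder YAML, illustrative only: because this container sets `command`,
# the image ENTRYPOINT (and therefore the baked-in restart wrapper) never runs.
k8s_yaml(blob("""
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fortune
spec:
  selector:
    matchLabels: {app: fortune}
  template:
    metadata:
      labels: {app: fortune}
    spec:
      containers:
      - name: fortune
        image: fortune
        command: ["/go/bin/fortune"]
"""))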
We can solve (1) by baking the behavior into `docker_build_with_restart`.
I've been noodling on this and haven't yet come up with a solution I like, so here's some spitballing for now:
a) In the `kubernetesapply` reconciler, if we have a custom apply and the image has an `OverrideCommand`, fail with a link to docs suggesting workarounds. Not the nicest experience, but it minimizes the chances of "this is mysteriously not working".
b) Rather than simply changing the Docker image's entrypoint, rename and replace the existing entrypoint binary with `tilt-restart-wrapper`. This would allow a `ContainerSpec` that specifies a `command` to still get the restart behavior, as long as it's invoking the same binary. My hunch is this is too invasive and will break some things.
c) Just have `docker_build_with_restart` set the image's entrypoint and make sure the docs caveat the `command` problem.
  i) We could also have the reconciler check whether the applied YAML has a `ContainerSpec` with a `command`.
d) In the reconciler, if the image has an `OverrideCommand`, then after applying the YAML, immediately edit it to stick the `OverrideCommand` in (and if we can't find a `ContainerSpec` with a matching image, e.g. because it's a CRD, then fail). This might be good. There's a risk that the apply command creates a CRD whose controller ensures there's a Deployment with a specified `command`, and immediately after we edit the Deployment, the CRD's reconciler reverts the change. I'd think in that case the apply command wouldn't be returning the Deployment as part of its YAML.
fwiw - I'm not too worried about problem (2) at this point (the original reporter confirmed this wasn't a problem for them).
Slight modification on (d): rather than editing the applied YAML, maybe add a warning if the KubernetesDeploy reconciler sees a Pod overriding the command on a `k8s_custom_deploy`?
> fwiw - I'm not too worried about problem (2) at this point (the original reporter confirmed this wasn't a problem for them).
I wasn't super worried about it, either, until I implemented my workaround and it took me a few minutes to figure out why it wasn't working :)
> Slight modification on (d): rather than editing the applied YAML, maybe add a warning if the KubernetesDeploy reconciler sees a Pod overriding the command on a `k8s_custom_deploy`?
yeah, that seems good
Could it be that `helm_resource` together with `docker_build` also leads into this issue when trying to use the parameter `entrypoint` of `docker_build`?
> Could it be that `helm_resource` together with `docker_build` also leads into this issue when trying to use the parameter `entrypoint` of `docker_build`?
I'm using `custom_build` (with the `entrypoint` parameter) and `helm_resource` together, and the entrypoint is lost after deploying the helm application.
I tried it with
# ref, context, image, and entrypoint are assumed to be defined elsewhere in the Tiltfile
df = '''
FROM {image}
ENTRYPOINT {entry}
'''.format(image=image, entry=entrypoint)

docker_build(ref, context, entrypoint=entrypoint, dockerfile_contents=df)
and `helm_resource` together, and that works.
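For completeness, a hedged sketch of how that `docker_build` might pair with `helm_resource`, mirroring the parameter shapes from the Tiltfile at the top of the thread (the resource name, chart path, and `service.image` key are illustrative, not confirmed from this thread):

load('ext://helm_resource', 'helm_resource')

# Illustrative pairing only; ref is the same image reference passed to docker_build above.
helm_resource(
    'application',
    chart='helm',
    image_deps=[ref],
    image_keys=['service.image'],
)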