Allow ko layers to stack cleanly
Sometimes I'd like an image with two Go binaries built with ko. ko currently adds uniquely named binaries to the /ko-app folder, but it dumps all kodata into the same KO_DATA_PATH. Thus one binary's kodata could overwrite the other's.
If each binary had its data in a unique place, setting WorkDir could be one workaround - https://github.com/google/ko/issues/55

Though I still think you would need a launcher/shim binary to set the correct WorkDir prior to running the executable we want.
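For illustration, here is a minimal sketch of such a launcher/shim. The per-binary data layout (/var/run/ko/&lt;name&gt;) is purely an assumption for the example; ko does not produce it today.

```go
// Hypothetical launcher: give each binary its own working directory before
// exec'ing it, so relative kodata lookups resolve to a per-binary location.
// The /var/run/ko/<name> layout is an assumption for illustration only.
package main

import (
	"log"
	"os"
	"path/filepath"
	"syscall"
)

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: shim /ko-app/<binary> [args...]")
	}
	bin := os.Args[1]
	// e.g. /ko-app/foo -> working directory /var/run/ko/foo
	workDir := filepath.Join("/var/run/ko", filepath.Base(bin))
	if err := os.Chdir(workDir); err != nil {
		log.Fatalf("chdir %s: %v", workDir, err)
	}
	// Replace the shim process with the requested binary (Unix-only).
	if err := syscall.Exec(bin, os.Args[1:], os.Environ()); err != nil {
		log.Fatalf("exec %s: %v", bin, err)
	}
}
```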
> If each binary had its data in a unique place, setting WorkDir could be one workaround

This is container-global, so we couldn't set it differently for each binary.
> This is container-global, so we couldn't set it differently for each binary.
yup
This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.
We can just reuse the same directory. It's only a problem if the kodata directories have conflicting files, which we can detect and error out on. I imagine the number of situations where there are both conflicting files and it's impossible to fix by using subdirectories within kodata would be quite small.

I'd be open to a PR that just detects a conflict?

This also seems like a totally reasonable thing to have config for, so that each target can define where it wants the kodata directory to end up on the filesystem.
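A rough sketch of what that conflict check could look like, assuming the build step has collected each target's kodata files keyed by their destination path. The names and map shapes here are illustrative, not ko's internal representation.

```go
// Illustrative only: detect kodata files that two build targets would place
// at the same path under KO_DATA_PATH.
package main

import (
	"fmt"
	"log"
)

// detectKodataConflicts takes a map of build target -> set of destination
// paths contributed by that target's kodata directory, and errors if two
// targets claim the same path.
func detectKodataConflicts(filesByTarget map[string]map[string]bool) error {
	owner := map[string]string{} // destination path -> target that claimed it
	for target, files := range filesByTarget {
		for path := range files {
			if prev, ok := owner[path]; ok && prev != target {
				return fmt.Errorf("kodata conflict: %q provided by both %s and %s", path, prev, target)
			}
			owner[path] = target
		}
	}
	return nil
}

func main() {
	err := detectKodataConflicts(map[string]map[string]bool{
		"./cmd/foo": {"config.yaml": true},
		"./cmd/bar": {"config.yaml": true}, // collides with foo's file
	})
	log.Println(err) // reports the conflicting path and both targets
}
```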
> Sometimes I'd like an image with two Go binaries built with ko.
> ko currently adds uniquely named binaries to the /ko-app folder but …

Seeking clarification - @dprotaso, do you mean that ko currently supports building multi-binary images? Trying `ko publish -L ./cmd/foo ./cmd/bar` just gets me two images, not one with /ko-app/foo and /ko-app/bar.
Or did you mean only that the current layer structure would allow for such a thing, but the (potential) kodata collision precludes supporting such a configuration?
> Or did you mean only that the current layer structure would allow for such a thing, but the (potential) kodata collision precludes supporting such a configuration?
It's this. We haven't really considered building multi-binary ko images, and kodata is the only real hard thing about it AFAIK.
Is there a sense yet of what the UI would look like? Leaving kodata aside for the moment.

Spitballing, but not knowing how other people (would) use this:

- A flag:

  ```
  ko publish --single-image ./cmd/foo ./cmd/bar
  ```

- Add an imageName to each element in builds:

  ```yaml
  builds:
    - id: foo
      main: ./cmd/foo
      imageName: foo-and-bar
    - id: bar
      main: ./cmd/bar
      imageName: foo-and-bar
  ```

- Leave builds as is, but add a new top-level images array (if ko aspires to mirror goreleaser's format, this is possibly cleaner than the previous bullet):

  ```yaml
  builds:
    - id: foo
      main: ./cmd/foo
    - id: bar
      main: ./cmd/bar
  images:
    - name: foo-and-bar
      builds:
        - foo
        - bar
  ```
There isn't really an idea of what the UX would be yet. We'd need to start with a clear use case, and multiple users asking for it.

Something like it should be possible today by hacking things together with some other tools like crane; if you have a need for it I can help describe that better.

We'd need to see a clearer need for it to be built into ko itself, since as you've noted it would change the UX quite a bit. So far ko has been designed to build single-binary images; changing that would require more motivation.
:nod: Yeah, my interest here is because I'd like this myself. (For a monorepo case, which is ... definitely a departure from what ko does currently.) I've got a kinda-hacky local branch that builds a single image, but nothing yet re: UI/UX. (Or kodata; my static assets are provided by my base image.)
I'd love to hear more about crane - I wasn't previously familiar with it, and on a very brief look I see lots about using it as a nicer docker-compose, but not really image building like this.
I'm curious about your use case. If there are multiple binaries in an image, which one is the entrypoint? Do you just intend to shell out from one binary to the other? Or do you need some kind of init process?
Kubernetes cluster - so the YAML indicates which command is to be run for a given container.
We could do a bunch of single-binary images, but the multi-binary image means only having to manage one docker repo and one built artifact. It just fits better into our current build-package-deploy model. (Which, pre-k8s, was a single tarball plopped on a host, with varying systemctl units depending on which role the host was playing.)
Multi-binary also makes it easier to cover the use case of:
- ./cmd/foo is what the container runs, but
- I want to `kubectl exec` into the running container and run a utility binary for some reason (inspecting the local filesystem, etc.).
I am struggling to think of a way to fit this cleanly into ko.

One option might be to create a chain of ko builds that use another ko build as a base image. I don't think we support this kind of thing yet, but there's precedent for it with https://github.com/google/ko/pull/371/files

We could drop into a new ko build if we find ko:// in the base image config. This isn't ideal because you serialize the builds for no reason, but it would work without being too disruptive (as long as you have disjoint kodata files).
If you don't need the yaml templating parts of ko, you could do this with another tool... something like:

```sh
crane append -t repo.example.com/merged \
  -b $(ko publish ./foo) \
  -f $(crane export $(ko publish ./bar) -) \
  -f $(crane export $(ko publish ./baz) -)
```
This would use ./foo as a "base", then build and flatten ./bar and ./baz's filesystems and append them as individual layers to ./foo.
Yeah that's more or less what I was envisioning.
Since it's possible without modifying ko (though imperfectly), and it's not very clear how it would work in ko's UX, I'm inclined not to support this for the time being.
Given the discussion in https://github.com/google/ko/issues/472, we could just allow users to set the entrypoint in the container config, and that actually seems pretty straightforward?
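For reference, setting the entrypoint on an already-built image is a small config mutation with go-containerregistry; a sketch of roughly what such an option would amount to (the image references and chosen binary are illustrative, and this is not an existing ko flag):

```go
// Illustrative sketch: override the entrypoint of an already-built image
// using go-containerregistry, then push it under a new tag.
package main

import (
	"log"

	"github.com/google/go-containerregistry/pkg/crane"
	"github.com/google/go-containerregistry/pkg/v1/mutate"
)

func main() {
	img, err := crane.Pull("repo.example.com/merged")
	if err != nil {
		log.Fatal(err)
	}
	cf, err := img.ConfigFile()
	if err != nil {
		log.Fatal(err)
	}
	cfg := cf.Config // copy of the container config
	cfg.Entrypoint = []string{"/ko-app/foo"}
	cfg.Cmd = nil
	img, err = mutate.Config(img, cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := crane.Push(img, "repo.example.com/merged:foo"); err != nil {
		log.Fatal(err)
	}
}
```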
This actually makes a lot of sense if you want to ship a binary like grpc-health-probe for use in Kubernetes, although a beta feature in Kubernetes 1.24 kind of removes the need for it.
Any updates on this request?