serving
Respect Kubernetes Local Registry ConfigMap KEP
See: https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
Not very familiar with knative, apologies if this is the wrong component etc.
Describe the feature
Kubernetes has a standard for communicating local registry discovery info to development tools.
This permits projects like minikube, kind, microk8s, etc to standardize providing information about how to access a local registry for development content, which may be necessary due to e.g. varying network reachability and configurations like https://github.com/containerd/containerd/blob/main/docs/hosts.md
I work on KIND and see users failing to resolve images when using knative. It appears this is because knative attempts to resolve image tags without being aware of how those tags will actually be resolved in the cluster.
With the configmap in KEP 1755 you can discover this information in a standard way for well configured development clusters without needing direct access to runtime resolution.
Otherwise, resolution should be left to the CRI runtime, which may even use a different transport or credentials than available to knative.
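For reference, the discovery flow described above can be sketched in Go. This is a minimal illustration, not knative or KIND code: the struct mirrors the sample in KEP 1755, and the hand-rolled parser only handles the flat key: value payload. A real implementation would fetch the local-registry-hosting ConfigMap from the kube-public namespace with client-go and decode the localRegistryHosting.v1 key with a YAML library.

```go
package main

import (
	"fmt"
	"strings"
)

// LocalRegistryHostingV1 mirrors the sample struct from KEP 1755
// (the fields of the localRegistryHosting.v1 ConfigMap payload).
type LocalRegistryHostingV1 struct {
	Host                     string // registry address from the host machine
	HostFromClusterNetwork   string // registry address from inside the cluster network
	HostFromContainerRuntime string // registry address as the container runtime sees it
	Help                     string // URL with setup documentation for humans
}

// parseLocalRegistryHosting is a minimal sketch that handles the flat
// "key: value" YAML used by the KEP payload; real code would use a
// proper YAML decoder such as sigs.k8s.io/yaml instead.
func parseLocalRegistryHosting(payload string) LocalRegistryHostingV1 {
	out := LocalRegistryHostingV1{}
	for _, line := range strings.Split(payload, "\n") {
		// Split on the first colon only, so values like "localhost:5000" survive.
		key, value, ok := strings.Cut(line, ":")
		if !ok {
			continue
		}
		value = strings.Trim(strings.TrimSpace(value), `"`)
		switch strings.TrimSpace(key) {
		case "host":
			out.Host = value
		case "hostFromClusterNetwork":
			out.HostFromClusterNetwork = value
		case "hostFromContainerRuntime":
			out.HostFromContainerRuntime = value
		case "help":
			out.Help = value
		}
	}
	return out
}

func main() {
	payload := "host: \"localhost:5000\"\nhostFromContainerRuntime: \"kind-registry:5000\""
	cfg := parseLocalRegistryHosting(payload)
	fmt.Println(cfg.Host, cfg.HostFromContainerRuntime)
	// → localhost:5000 kind-registry:5000
}
```

A tool that resolves tags itself would then use HostFromContainerRuntime (or Host, depending on where it runs) as the address to contact, instead of the name that appears in the image reference.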
For context, we want to bake the digest into the deployment and possibly in the future into the revision. K8s currently doesn't offer an API to perform tag-to-digest resolution, so we have to do it ourselves, but we use the serviceaccount/imagepullsecrets.
We do by default skip a bunch of local registries https://github.com/knative/serving/blob/6d26f54e8874988c3fe9e9d12632953a2b3772b8/config/core/configmaps/deployment.yaml#L47
> With the configmap in KEP 1755 you can discover this information in a standard way for well configured development clusters without needing direct access to runtime resolution.
A few things aren't clear to me from the KEP:
- the namespace kube-public - who creates this namespace and manages RBAC, or is this something special I didn't know about?
- the local-registry-hosting config map - is there tooling to parse the config map property?
/triage needs-user-input
> For context, we want to bake the digest into the deployment and possibly in the future into the revision. K8s currently doesn't offer an API to perform tag-to-digest resolution, so we have to do it ourselves, but we use the serviceaccount/imagepullsecrets.
I will resist the urge to diverge the thread into why this isn't a separate concern :-) Understanding that knative wants to do this, there is information available for how to successfully reach the registry.
> We do by default skip a bunch of local registries
Would you consider localhost?
Though again, it doesn't need to skip resolution, it just needs to use the right address, and that information is available.
For local registries it can be tricky to reasonably use a single name, we don't want users to have to go configure DNS or something. Containerd offers pretty powerful configuration for how to actually interact with registries for a given host.
I imagine this will become more common at some point with a global airgapping cache now that there's support for a default hosts configuration.
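To make the containerd point concrete, a local-registry mapping in containerd's hosts.toml format looks roughly like the following. The paths and registry names are illustrative (in the style of the KIND local-registry setup), not taken from this thread:

```toml
# /etc/containerd/certs.d/localhost:5000/hosts.toml
# Pulls for "localhost:5000" are served by the registry container,
# which is reachable under a different name on the container network.
[host."http://kind-registry:5000"]
  capabilities = ["pull", "resolve"]
```

With a configuration like this, the name a user puts in their image reference and the address the runtime actually contacts differ, which is exactly why tag resolution done outside the runtime needs the KEP 1755 discovery information.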
> the namespace kube-public - who creates this namespace and manages RBAC - or is this something special I didn't know about
It's not special. Cluster admin is responsible for reasonable RBAC etc.
> local-registry-hosting config map - is there tooling to parse the config map property?
Good question. There's a top level versioned key with just a few simple yaml string fields, some of which are more user facing (like help). There's a sample Go struct in the KEP but I'm not sure if there's a common package for this.
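For illustration, a conforming ConfigMap looks roughly like this. The values are examples in the style of the KIND local-registry docs, not prescribed by the KEP:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:5000"
    hostFromContainerRuntime: "kind-registry:5000"
    hostFromClusterNetwork: "kind-registry:5000"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
```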
cc @nicks.
> I will resist the urge to diverge the thread into why this isn't a separate concern :-) Understanding that knative wants to do this, there is information available for how to successfully reach the registry.
Presentation :) https://docs.google.com/presentation/d/e/2PACX-1vTgyp2lGDsLr_bohx3Ym_2mrTcMoFfzzd6jocUXdmWQFdXydltnraDMoLxvEe6WY9pNPpUUvM-geJ-g/pub?resourcekey=0-FH5lN4C2sbURc_ds8XRHeA#slide=id.p
Re: local-registry-hosting configmap -
There's a reference implementation here: https://github.com/tilt-dev/localregistry-go
But in practice, I think almost every implementor copied and pasted the spec code 😂 https://github.com/k3d-io/k3d/blob/1afe36033dc7f28479ce3de1e3c3c3efba772e6e/pkg/types/k8s/registry.go#L47
@nicks can we move the struct into a k8s-sigs repo or something?
This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.
@nicks did that struct end up in any central spot? cc @BenTheElder
This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.
/remove-lifecycle stale
@nicks thoughts on the struct?
ya, both the code in k8s/enhancements and in tilt-dev/localregistry-go should be licensed under apache2, so i don't see any IP reason why it can't live in k8s-sigs.
i don't know much about the administrative costs for repos under k8s-sigs (creation, approvals, governance, etc) and don't have a sense of whether that's worthwhile or not.
Primarily you need a SIG to agree to host it. There's a template issue type for requesting a repo, which needs approval from SIG leads (typically brought up on the mailing list and cross-referenced with the GitHub issue); then someone will stamp out a template repo or choreograph migrating in a donated repo. Donated repos have to be checked for license and copyright assignability.
https://github.com/kubernetes/org/issues/new?assignees=&labels=area%2Fgithub-repo&projects=&template=repo-create.yml&title=REQUEST%3A+%3CCreate+or+Migrate%3E+%3Cgithub+repo%3E
Ongoing overhead isn't high, and I think this makes sense, though it's a little unusual.
It also seems like https://github.com/tilt-dev/localregistry-go ought to be usable as-is, though. Users having to manually configure this seems like an ongoing footgun for knative local dev; we could automate that away with support for this API (or something similar, but it seems like we should use the KEP-ed approach that other tools already understand ...)
lemme poke sig-cluster-lifecycle to see if they'd agree to host it, since i worked with them on the original kep
ah, the sig-cluster-lifecycle folks reminded me that we discussed this during the KEP process and decided it would not be worthwhile to create a repo for this. it's even in the non-goals section of the KEP! :joy: https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry#non-goals
current recommendation is to copy the struct or depend on the tilt-dev repo.
> decided it would not be worthwhile to create a repo for this. it's even in the non-goals section of the KEP! 😂
That seems backwards - what if the format evolves? Where do people open issues/suggest improvements? etc.
This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.
> ah, the sig-cluster-lifecycle folks reminded me that we discussed this during the KEP process and decided it would not be worthwhile to create a repo for this.
this seems wrong and worth revisiting
in the meantime: worth noting that multiple local cluster tools support this, and it improves the user experience
we should start a thread somewhere upstream to discuss hosting the types, somewhere between SIG Arch and Cluster Lifecycle; I don't think the project should have "official" objects with no machine spec
> That seems backwards - what if the format evolves? Where do people open issues/suggest improvements? etc.
how so? if the format evolves there will be a version iteration which will be added to the KEP. this is stated there. -1 to host the spec in a repository.
also the spec is YAML, the Go struct is just an example.
here are a couple of things that can be clarified in the KEP, if @nicks wishes to iterate with a PR:
- comments can be added to https://github.com/kubernetes/enhancements/issues/1755 or on SIG Cluster Lifecycle communication channels https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle#contact
- licensing of the spec and embedded source code fall under the root LICENSE of the same repository - Apache 2.0
This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.