API adoption struggle
What happened: Gateway API is using CRDs rather than builtins. It's hindering adoption.
What you expected to happen: The community should support the Gateway API.
Anything else we need to know?: Please respond to https://github.com/helm/helm/pull/12912#issuecomment-2300635327. Helm really should have best-practice recommendations for charts to use the Gateway API, like it does for Ingress.
This is an API, not an implementation, and there is already a list of Helm charts on Artifact Hub that seem related to Gateway API: https://artifacthub.io/packages/search?ts_query_web=gateway-api&sort=relevance&page=1 Each implementation of the API adds its own Helm chart, which makes sense to me as a good starting point for trying to standardize a Helm chart.
It's not about packaging Gateway API implementations.
It's about making it easy for packagers of applications to include Gateway API objects, so that end users can use the Gateway API when deploying those apps.
So, you want to be able to install the CRDs using Helm, because installing them the default way is not easy?
Not CRDs. CRs. I want to be able to do the Gateway API equivalent of this (which works today):
#install grafana and expose it via ingress
helm install grafana grafana --set ingress.enabled=true,ingress.hosts="grafana.example.org"
like, maybe:
#install grafana and expose it via the gateway api
helm install grafana grafana --set httpRoute.enabled=true,httpRoute.hosts="grafana.example.org"
and have the application actually exposed properly via the cluster-admin installed gateway.
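To make that concrete, here is a hedged sketch of the HTTPRoute such a flag would need to render. The gateway name and namespace are placeholders for whatever the cluster admin has installed, and the backend port assumes the chart's default service port:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: grafana
spec:
  parentRefs:
    - name: shared-gateway       # placeholder: the Gateway installed by the cluster admin
      namespace: gateway-infra   # placeholder namespace for that Gateway
  hostnames:
    - grafana.example.org
  rules:
    - backendRefs:
        - name: grafana          # the chart's Service
          port: 80               # assumed default service port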
For the referenced issue, it's about enabling:
helm create myapp
to produce the templated HTTPRoute code automatically, so new Helm chart authors wouldn't have to plumb in the HTTPRoute objects themselves, just like it already does for Ingress objects.
This would encourage support for the Gateway API in all new Helm charts rather than keeping them Ingress-only.
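A rough sketch of what such a scaffold could emit as templates/httproute.yaml, reusing the helper templates helm create already generates. The value names (httpRoute.enabled, httpRoute.hostnames, httpRoute.parentRefs) and the reuse of .Values.service.port are illustrative assumptions, not an agreed convention:
{{- if .Values.httpRoute.enabled -}}
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: {{ include "myapp.fullname" . }}
  labels:
    {{- include "myapp.labels" . | nindent 4 }}
spec:
  {{- with .Values.httpRoute.parentRefs }}
  parentRefs:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  hostnames:
    {{- toYaml .Values.httpRoute.hostnames | nindent 4 }}
  rules:
    - backendRefs:
        - name: {{ include "myapp.fullname" . }}
          port: {{ .Values.service.port }}
{{- end }}
This mirrors the structure of the ingress.yaml template that helm create emits today, which is part of why it feels like a natural fit.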
Thanks for raising this @kfox1111!
I responded on that PR, but I agree that we need a better way to handle this - this has been in my head for some time, but I haven't had much to suggest, so I've been letting it simmer.
My current thought is that we help with defining standard fields for charts to use for HTTPRoutes.
Probably something like this (to put it in values.yaml style):
httpRoute:
  enabled: true # tbh I am not sure if this adds value by itself
  hostname: somehostname.example.com # sets the _first_ hostname in the hostnames list
  hostnames: # If you want to supply multiple - if `hostname` is also set, it will be prepended to the list
    - someotherhostname.example.com
    - thirdhostname.example.com
  parentRef:
    name: gatewayName
    namespace: gatewayNamespace
    group: # Group, version, and kind default to Gateway, but are there so people can set other things as parents.
    version: # These could be omitted in a first implementation.
    kind:
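Rendering those semantics (hostname prepended to hostnames, and the parentRef falling back to a Gateway parent) could look roughly like this inside an HTTPRoute template. This is only a sketch of the behavior described above; only group and kind are shown, and the fallback values are assumptions:
hostnames:
  {{- with .Values.httpRoute.hostname }}
  - {{ . | quote }}
  {{- end }}
  {{- range .Values.httpRoute.hostnames }}
  - {{ . | quote }}
  {{- end }}
parentRefs:
  - name: {{ .Values.httpRoute.parentRef.name }}
    namespace: {{ .Values.httpRoute.parentRef.namespace }}
    group: {{ .Values.httpRoute.parentRef.group | default "gateway.networking.k8s.io" }}
    kind: {{ .Values.httpRoute.parentRef.kind | default "Gateway" }}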
My questions have mainly been about how to organize and publish this. Do we publish this as part of Gateway API, or do we do it in Helm? The latter will require more buy-in from the Helm community, which may be valuable anyway.
Yeah, I think it would be most valuable if it was in Helm. The 'helm create app' functionality deploys a web app out of the box, and exposing the web app outside the cluster is key functionality IMO. When I start a new chart, I usually start there.
If we can't solve this, at least having documentation/standards on the gateway-api side would still be valuable.
The big issue here, I think, is: what is vanilla k8s?
There's several factors here that have made this problem weird:
- api-machinery wants to push things out to CRDs, even key k8s functionality these days
- gateway-api is out of tree
- gateway-api is part of a major k8s SIG
- Ingress is in tree, kind of grandfathered in
- gateway-api was meant to replace Ingress, but Ingress is in tree, had been beta and in tree long enough that people were relying on it, so folks decided to make it v1 while still intending Gateway API to replace it
So, is gateway-api vanilla k8s? I'd say yes, but others would say no. Which is it? How do we reach agreement on it, so that projects like Helm, which assert they only support vanilla Kubernetes out of the box, would accept it?
I can think of another parallel too: CSI snapshot support is done as a CRD, and I'd argue it's also vanilla.
I've actually submitted a CFP to Contributor Summit in Salt Lake City to talk about this and similar issues, let me report back here after that.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
Let's leave this open until it actually merges.
@kfox1111: Reopened this issue.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
I think this merged the other day. Just adding it here for posterity.