crd/ContourDeployment: Add field 'Ports'
Add the `Ports` field to crd/ContourDeployment to enable setting `containerPort` & `nodePort` for the Envoy service.
Signed-off-by: Gang Liu [email protected]
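As a rough illustration of the intent, the new field could look something like the sketch below. This is a hedged, simplified stand-in, not Contour's actual API types: the type names (`Port`, `EnvoySettings`, `ContourDeploymentSpec`) and field layout here are assumptions for illustration only.

```go
package main

import "fmt"

// Port is a hypothetical sketch of one entry in the proposed Ports field.
type Port struct {
	Name          string
	ContainerPort int32
	NodePort      int32 // only meaningful when the Envoy service is type NodePort
}

// EnvoySettings and ContourDeploymentSpec are simplified stand-ins for
// where such a field might live in the ContourDeployment spec.
type EnvoySettings struct {
	Ports []Port
}

type ContourDeploymentSpec struct {
	Envoy EnvoySettings
}

func main() {
	// A user could pin both the container port and the nodePort per listener.
	spec := ContourDeploymentSpec{
		Envoy: EnvoySettings{
			Ports: []Port{
				{Name: "http", ContainerPort: 8080, NodePort: 30080},
				{Name: "https", ContainerPort: 8443, NodePort: 30443},
			},
		},
	}
	for _, p := range spec.Envoy.Ports {
		fmt.Printf("%s containerPort=%d nodePort=%d\n", p.Name, p.ContainerPort, p.NodePort)
	}
}
```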
Codecov Report
Merging #4705 (1126b3f) into main (01dc527) will decrease coverage by 0.06%. The diff coverage is 39.13%.
Additional details and impacted files
```
@@            Coverage Diff             @@
##             main    #4705      +/-   ##
==========================================
- Coverage   76.50%   76.44%   -0.07%
==========================================
  Files         140      140
  Lines       16772    16793      +21
==========================================
+ Hits        12832    12837       +5
- Misses       3690     3705      +15
- Partials      250      251       +1
```

| Impacted Files | Coverage Δ | |
|---|---|---|
| internal/provisioner/controller/gateway.go | 58.33% <39.13%> (-1.67%) | :arrow_down: |
| internal/sorter/sorter.go | 97.95% <0.00%> (-1.03%) | :arrow_down: |
Marking this PR stale since there has been no activity for 14 days. It will be closed if there is no activity for another 30 days.
Could you explain what you're trying to accomplish with this change? Need to consider how this relates to Gateway Listeners' concept of Ports, which define the port a client would use to connect to the Gateway.
The main reason is to let users set the nodePort number when using `ServiceType: NodePort`.
xref https://github.com/projectcontour/contour/issues/4499
The Contour project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 14d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, the PR is closed
You can:
- Mark this PR as fresh by commenting or pushing a commit
- Close this PR
- Offer to help out with triage
Please send feedback to the #contour channel in the Kubernetes Slack
https://github.com/kubernetes-sigs/gateway-api/issues/1061 is closed, so what's the plan for Contour? @skriss
@izturn ack, will try to revisit this in the coming weeks. If you could add some more detail on your use case, that'd be helpful.
- Some of our users want to expose their services to the public, but they don't have a LB, so service type `NodePort` is their choice. Compared to an auto-assigned port, they prefer a static port, which avoids port conflicts and other problems.
- It helps https://github.com/projectcontour/contour/issues/4746#issuecomment-1261612234 too.
ping
@skriss @sunjayBhatia ping
ping @skriss @sunjayBhatia
I'm not crazy about specifying some port info in ContourDeployment, because I think it belongs with the Listener logically. Right now, a user can change the Listener definitions on the Gateway, and Contour will update the Service definition accordingly. However, ContourDeployment changes typically do not result in changes to the underlying infra, due to the second paragraph here. That's just one illustration of why having port info stored in two different places can be problematic.
What if alternately, we did the following:
- we change the behavior described here today, to set the NodePort value to the Listener's PortNumber value when a NodePortService is requested.
- if we did the above, then we could document that when requesting a NodePort service, it's not possible to get auto-assigned ports due to limitations in the Gateway API spec itself. If this is problematic for folks, then we have some justification to change the API spec upstream.
cc @sunjayBhatia interested in your thoughts here.
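The alternative proposed above can be sketched as follows. These are simplified stand-ins for the Gateway API `Listener` and `corev1.ServicePort` types, not Contour's actual provisioner code; the function name `servicePortsForNodePort` is made up for this example.

```go
package main

import "fmt"

// Listener is a minimal stand-in for a Gateway API Listener:
// just the name and port number matter for this sketch.
type Listener struct {
	Name string
	Port int32
}

// ServicePort is a minimal stand-in for corev1.ServicePort.
type ServicePort struct {
	Name     string
	Port     int32
	NodePort int32 // 0 means "let Kubernetes auto-assign"
}

// servicePortsForNodePort sketches the proposed behavior: when a
// NodePort Service is requested, pin each port's NodePort to the
// Listener's port number instead of leaving it auto-assigned.
func servicePortsForNodePort(listeners []Listener) []ServicePort {
	ports := make([]ServicePort, 0, len(listeners))
	for _, l := range listeners {
		ports = append(ports, ServicePort{
			Name:     l.Name,
			Port:     l.Port,
			NodePort: l.Port, // derive NodePort from the Listener, per the proposal
		})
	}
	return ports
}

func main() {
	ports := servicePortsForNodePort([]Listener{
		{Name: "http", Port: 30080},
		{Name: "https", Port: 30443},
	})
	for _, p := range ports {
		fmt.Printf("%s port=%d nodePort=%d\n", p.Name, p.Port, p.NodePort)
	}
}
```

Note that with this approach the Listener port must fall inside the cluster's NodePort range (30000-32767 by default), which is part of why the auto-assignment limitation would need to be documented.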
Ok, I will make a new PR, @skriss thx
> I'm not crazy about specifying some port info in ContourDeployment, because I think it belongs with the Listener logically. Right now, a user can change the Listener definitions on the Gateway, and Contour will update the Service definition accordingly. However, ContourDeployment changes typically do not result in changes to the underlying infra, due to the second paragraph here. That's just one illustration of why having port info stored in two different places can be problematic.
>
> What if alternately, we did the following:
>
> - we change the behavior described here today, to set the NodePort value to the Listener's PortNumber value when a NodePortService is requested.
> - if we did the above, then we could document that when requesting a NodePort service, it's not possible to get auto-assigned ports due to limitations in the Gateway API spec itself. If this is problematic for folks, then we have some justification to change the API spec upstream.
>
> cc @sunjayBhatia interested in your thoughts here.
Makes sense to me. I think the auto-assigned port idea has come up upstream as well, so this will be good feedback to give if users need it.
@izturn I am going to close this PR out and will keep an eye out for a new PR per https://github.com/projectcontour/contour/pull/4705#issuecomment-1371921283, thanks for the patience on this one!