ingress2gateway
feat: print all resources
What type of PR is this?
/kind feature
What this PR does / why we need it:
When reading the resources from a file, it is now possible to display all the resources, not only the Gateway API ones. In addition, the PROVIDER.md file has been properly formatted.
Which issue(s) this PR fixes:
Fixes #73
Does this PR introduce a user-facing change?:
By setting the flag `--all-resources`, it is now possible to print all the resources, not only the Gateway API ones. This flag is available only when reading the resources from a file.
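For illustration, a minimal sketch of how such a flag might be registered, assuming a cobra-based CLI; the command, variable names, and help text below are hypothetical, not the PR's actual code:

```go
package main

import "github.com/spf13/cobra"

// allResources is a hypothetical package-level flag variable.
var allResources bool

// newPrintCommand sketches a print command that honors --all-resources.
func newPrintCommand() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "print",
		Short: "Convert resources and print the result",
		RunE: func(cmd *cobra.Command, args []string) error {
			// Read resources from the input file, convert, then print;
			// when allResources is set, also print the non-Gateway-API objects.
			return nil
		},
	}
	// Only meaningful when reading from a file; ignored for cluster reads.
	cmd.Flags().BoolVar(&allResources, "all-resources", false,
		"print all resources read from the file, not only the Gateway API ones")
	return cmd
}
```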
Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all
My understanding is that this PR is driven by the idea that users will want to convert a yaml/json file with a lot of objects (or a group of files, once we support that). When reading from those files, instead of printing only the Gateway API resources, it will print all the resources plus the new Gateway API resources, minus the resources that the Gateway API resources replace.
100% accurate
If this is correct, this filtering logic feels a bit detached from the actual logic. I mean, let's take Kong as an example. If at Kong, UDPIngress and TCPIngress are replaced by the Gateway equivalents, my expectation is that when the provider reads the CRD, it decides whether the Gateway resources replace it and, based on this, decides whether to print it or not.
Yes, with this PR, the printing logic has been detached from the conversion logic. In the conversion logic for the CRDs, the output will be the corresponding Gateway API objects (say, for a TCPIngress as input, the output will contain Gateway + TCPRoute). In the printing logic, all the input objects are taken and, after proper filtering, displayed.
Do you see this flow as a problem?
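To make the split concrete, here is a hedged sketch of just the conversion step under simplified assumptions; `tcpIngress` and `convertTCPIngress` are illustrative stand-ins, not the project's actual types or functions:

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	gatewayv1alpha2 "sigs.k8s.io/gateway-api/apis/v1alpha2"
	gatewayv1beta1 "sigs.k8s.io/gateway-api/apis/v1beta1"
)

// tcpIngress is a simplified stand-in for Kong's TCPIngress CRD, modeling
// only the fields this sketch needs.
type tcpIngress struct {
	Name      string
	Namespace string
	Port      int32
}

// convertTCPIngress sketches the conversion step described above: one
// provider CRD in, a Gateway plus a TCPRoute out. Printing happens later,
// in a separate step.
func convertTCPIngress(in tcpIngress) (gatewayv1beta1.Gateway, gatewayv1alpha2.TCPRoute) {
	gw := gatewayv1beta1.Gateway{
		ObjectMeta: metav1.ObjectMeta{Name: in.Name, Namespace: in.Namespace},
		Spec: gatewayv1beta1.GatewaySpec{
			Listeners: []gatewayv1beta1.Listener{{
				Name:     "tcp",
				Port:     gatewayv1beta1.PortNumber(in.Port),
				Protocol: gatewayv1beta1.TCPProtocolType,
			}},
		},
	}
	route := gatewayv1alpha2.TCPRoute{
		ObjectMeta: metav1.ObjectMeta{Name: in.Name, Namespace: in.Namespace},
	}
	return gw, route
}
```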
It is just that I think it should sit closer to the logic. Detaching them from each other is confusing, imo. What benefits do you see in keeping them detached?
The fact is that they are separate pieces of logic: they do different things and are subsequent operations. How would you restructure such a piece of code? It's not clear to me 🤔
@LiorLieberman I've added the client back in the conf. PTAL
I think the provider-specific logic should 1) read all the resources it cares about, regardless of whether they come from a file or from the cluster, 2) replace them with the equivalent Gateway API resources, 3) output.
So the workflow for converting from a file:
- parse the file and construct a list of the resources I am interested in (Ingresses + provider-specific CRDs); keep all other unrelated objects in a separate list to be used if the `--all-resources` flag is true
- convert them to the Gateway equivalents
- print them; if `--all-resources`, also print the list we kept initially
We do not have any implementation that actually reads CRDs from the cluster/file and converts them yet; maybe it's better to revisit this debate once we have one.
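A minimal Go sketch of this partition-convert-print flow, under the assumptions above; `isIngressOrProviderCRD` and `convertToGateway` are hypothetical helpers standing in for the provider-specific pieces:

```go
package main

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// isIngressOrProviderCRD is a hypothetical predicate selecting the
// resources the provider cares about.
func isIngressOrProviderCRD(obj unstructured.Unstructured) bool {
	kind := obj.GetKind()
	return kind == "Ingress" || kind == "TCPIngress" || kind == "UDPIngress"
}

// convertToGateway is a hypothetical stand-in for the provider-specific
// conversion to the Gateway API equivalents.
func convertToGateway(in []unstructured.Unstructured) []unstructured.Unstructured {
	// ... conversion logic lives here ...
	return nil
}

// convertFromFile sketches the proposed flow: partition the parsed objects,
// convert the relevant ones, and append the rest only when --all-resources
// is set.
func convertFromFile(objects []unstructured.Unstructured, allResources bool) []unstructured.Unstructured {
	var relevant, others []unstructured.Unstructured
	for _, obj := range objects {
		if isIngressOrProviderCRD(obj) {
			relevant = append(relevant, obj)
		} else {
			// Unrelated objects are kept aside, printed only on request.
			others = append(others, obj)
		}
	}
	out := convertToGateway(relevant)
	if allResources {
		out = append(out, others...)
	}
	return out
}
```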
This is a good representation of the current workflow 👍
I plan to implement CRD conversion for Kong pretty soon. I think we can bring this discussion there, what do you think?
As we discussed offline, let's keep this one open and get back to it once you do the CRD implementation.
/hold
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: mlavacca
The full list of commands accepted by this bot can be found here.
The pull request process is described here.
- ~~OWNERS~~ [mlavacca]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
I've revamped this PR after a long period of inactivity, and I drastically changed the initial approach after all the changes that have been merged in the meantime. The PR is about 95% complete; some documentation and minor cleanup are still missing, but I'd like to have the idea validated before adding the last bits.
Please, folks, take another look :)
/cc @LiorLieberman @levikobi
@mlavacca you moved some logic from `common` to `i2gw`, can you please explain why?
I needed to move the resource-reader files to the `i2gw` package to avoid an import cycle: with this change, we read resources not only from the provider packages but from `i2gw` as well, since we want to read resources that do not have to be converted.
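For illustration only, a hedged sketch of the dependency shapes involved; the package paths are hypothetical, but the point is that a resource reader left in `common` would close a cycle once reading also happens at the `i2gw` level:

```go
// Hypothetical package layout, for illustration only.
//
// If the resource reader stayed in common and had to reference i2gw
// (because resources are now read at the i2gw level too, not only inside
// the provider packages), the import graph would contain a cycle:
//
//	i2gw -> providers/kong -> common -> i2gw   // import cycle: build fails
//
// Moving the resource-reader files into i2gw keeps the graph acyclic:
//
//	providers/kong -> common
//	i2gw -> providers/kong
//	i2gw -> common
package i2gw
```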
PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale