ingress2gateway
ingress-contour Support
contour projectcontour.io/ingress/ingress-contour-contour

What would you like to be added:
We would like to migrate our Contour (Envoy) provider setup; Contour is not listed among the three existing providers.

Why this is needed:
This will help us migrate a lot of our existing resources.
We are getting the following error:

```
...go/bin/ingress2gateway print -n backend
Error: failed to read istio resources from the cluster: failed to read resources from cluster: failed to read gateways: failed to list istio gateways: failed to get API group resources: unable to retrieve the complete list of server APIs: networking.istio.io/v1beta1: the server could not find the requested resource
Usage:
  ingress2gateway print [flags]
Flags:
  -A, --all-namespaces      If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.
  -h, --help                help for print
      --input_file string   Path to the manifest file. When set, the tool will read ingresses from the file instead of reading from the cluster. Supported files are yaml and json.
  -n, --namespace string    If present, the namespace scope for this CLI request
  -o, --output string       Output format. One of: (json, yaml) (default "yaml")
      --providers strings   If present, the tool will try to convert only resources related to the specified providers, supported values are [ingress-nginx istio kong apisix] (default [kong,apisix,ingress-nginx,istio])
```
The error you are getting is related to https://github.com/kubernetes-sigs/ingress2gateway/issues/138
In the meantime, you can work around it by specifying the provider(s) you need with the --providers flag.
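For example, to convert only the ingress-nginx resources in the backend namespace from the command above (pass whichever of the providers listed in the help output you actually use, comma-separated):

`ingress2gateway print -n backend --providers ingress-nginx`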
/cc @sunjayBhatia for the contour support request
Hi @LiorLieberman @sunjayBhatia, Contour is not in the list of supported providers, so we cannot select it. This issue is for adding that provider to the supported ones.
To add Contour to the supported providers, someone needs to implement the provider-specific logic. That's the reason I cc'd @sunjayBhatia.
Regardless of contour support, I explained why you are getting an error.
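To give a sense of what that provider-specific logic involves, here is a rough, self-contained Go sketch. The type and method names below are illustrative only and are not the actual ingress2gateway provider interface (the real providers live in the repo's provider packages); the shape of the work is the same, though: read Contour's custom resources and translate them into Gateway API objects.

```go
// Illustrative sketch only: these names and signatures are NOT the real
// ingress2gateway provider interface. It just shows the general shape of a
// new provider: read the provider's custom resources, convert to Gateway API.
package contour

import (
	"context"
	"fmt"
)

// httpProxy stands in for Contour's projectcontour.io/v1 HTTPProxy resource.
type httpProxy struct {
	Namespace string
	Name      string
	FQDN      string
}

// gatewayResources stands in for the Gateway API objects the tool emits
// (Gateways, HTTPRoutes, ...), which the CLI prints as YAML or JSON.
type gatewayResources struct {
	HTTPRoutes []string
}

// provider holds the Contour resources read from the cluster or an input file.
type provider struct {
	proxies []httpProxy
}

// readResourcesFromCluster would list HTTPProxy objects via the Kubernetes API
// (for example with a dynamic client) and store them on the provider.
func (p *provider) readResourcesFromCluster(ctx context.Context) error {
	// omitted: list projectcontour.io/v1 HTTPProxy in the requested namespace(s)
	return nil
}

// toGatewayAPI converts each stored HTTPProxy into Gateway API objects,
// mapping its virtualhost and routes onto HTTPRoutes attached to a Gateway.
func (p *provider) toGatewayAPI() gatewayResources {
	out := gatewayResources{}
	for _, hp := range p.proxies {
		out.HTTPRoutes = append(out.HTTPRoutes, fmt.Sprintf("%s/%s (host %s)", hp.Namespace, hp.Name, hp.FQDN))
	}
	return out
}
```

A real implementation would also register the new provider with the CLI so that contour shows up in the --providers list.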
Yep, I haven't had a lot of bandwidth to implement contour support in ingress2gateway, but if there is a willing contributor I would definitely help shepherd the changes!
@sunjayBhatia @LiorLieberman Hi, I want to give it a try. Can you tell me how to go about implementing it?
/assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.