ingress2gateway
--input_file requires an available cluster, does not support yaml files with multiple resources, does not give errors
What happened: I wanted to convert the Ingresses in a multi-document YAML file using --input_file, without access to the cluster.
./ingress2gateway print --input_file harbor.yaml
resulted in
Error: failed to create client: Get "http://localhost:6443/api?timeout=32s": dial tcp 127.0.0.1:6443: connect: connection refused
because the cluster is air-gapped and unreachable from my laptop. With a connection to the cluster, it returns
No resources found in default namespace.
What you expected to happen: The Ingress YAMLs are converted to HTTPRoutes.
How to reproduce it (as minimally and precisely as possible):
helm template harbor oci://registry-1.docker.io/bitnamicharts/harbor --set exposureType=ingress --set ingress.core.ingressClassName=nginx -n harbor > harbor.yaml
./ingress2gateway print --input_file harbor.yaml
The above gives no meaningful output, but it should either give an error or the expected output.
related to https://github.com/kubernetes-sigs/ingress2gateway/pull/78
Should be fixed by https://github.com/kubernetes-sigs/ingress2gateway/pull/128
Now, when reading local files, the Kubernetes client creation is skipped.
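With that change, an offline run against a local file, such as the command from the report above, should no longer try to reach the API server (a sketch of the expected usage; exact behavior depends on the release you are running):
./ingress2gateway print --input_file harbor.yaml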
@simonfelding I believe this issue is fixed now. Can you confirm?
closing this as fixed. Feel free to reopen if needed.
@LiorLieberman I still have this error with 0.2.0:
My only wish is to use the CLI offline, on our GitOps repository.
Hi @davinkevin, I guess the resources in your ingress.yaml file do not specify a namespace (and if they do, it isn't the default one). Please use the -A (--all-namespaces) flag.
It should provide your expected result; please let us know if it doesn't.
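For example, something along these lines (assuming your file is named ingress.yaml; adjust the path as needed):
ingress2gateway print --input_file ingress.yaml -A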
Thank you for your answer @levikobi. The result is different but not really better.
My file is just this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: podinfo.ingress.k8s.local
spec:
  rules:
    - host: podinfo.ingress.k8s.local
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: podinfo
                port:
                  name: http
@davinkevin are you sure you're using release 0.2? I can't seem to reproduce the error you're showing here, TBH, neither on main.
I would say yes 🤔
Hey @davinkevin, the reason is that you did not specify ingressClassName in the spec, so no provider picks it up.
Also, I think there is a bug when using port.name. Can you try to add ingressClassName and change port.name to port.number and report back if it works?
Hello @LiorLieberman,
So, I first tried with a specific ingressClassName and the change you mentioned:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: podinfo.ingress.k8s.local
spec:
  ingressClassName: nginx
  rules:
    - host: podinfo.ingress.k8s.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: podinfo
                port:
                  number: 1234
And it's working! The result is:
ingress2gateway print --input_file=ingress.yaml --providers=ingress-nginx -A
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  creationTimestamp: null
  name: nginx
spec:
  gatewayClassName: nginx
  listeners:
  - hostname: podinfo.ingress.k8s.local
    name: podinfo-ingress-k8s-local-http
    port: 80
    protocol: HTTP
status: {}
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  creationTimestamp: null
  name: podinfo.ingress.k8s.local-podinfo-ingress-k8s-local
spec:
  hostnames:
  - podinfo.ingress.k8s.local
  parentRefs:
  - name: nginx
  rules:
  - backendRefs:
    - name: podinfo
      port: 1234
    matches:
    - path:
        type: PathPrefix
        value: /
status:
  parents: []
If I use the named port, I indeed have an error:
So, the main remark I can make is that, for newcomers, it's really a shame not to be able to use the tool on bare Ingress files.
Maybe a generic provider to handle Ingresses when no other provider picks anything up?
I could be convinced the idea is good. Would you like to go ahead and contribute such a thing?
Good. Unfortunately, I don't think I'll have time to provide a quality PR for that support… #Sorry
However, I'll keep it in mind in case I have more time for it, but I'm sure someone will have done it before me 😅
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.