
--input_file requires an available cluster, does not support YAML files with multiple resources, does not give errors

simonfelding opened this issue 1 year ago · 14 comments

What happened: I wanted to convert the Ingresses in a multi-document YAML file using --input_file, without access to the cluster.

./ingress2gateway print --input_file harbor.yaml resulted in Error: failed to create client: Get "http://localhost:6443/api?timeout=32s": dial tcp 127.0.0.1:6443: connect: connection refused

because the cluster is air-gapped and unreachable from my laptop. With a connection to the cluster, it returns No resources found in default namespace.

What you expected to happen: The Ingress YAMLs are converted to HTTPRoutes.

How to reproduce it (as minimally and precisely as possible): helm template harbor oci://registry-1.docker.io/bitnamicharts/harbor --set exposureType=ingress --set ingress.core.ingressClassName=nginx -n harbor > harbor.yaml

./ingress2gateway print --input_file harbor.yaml

The above gives no meaningful output; it should produce either an error or the expected conversion.

simonfelding avatar Jan 15 '24 14:01 simonfelding

Related to https://github.com/kubernetes-sigs/ingress2gateway/pull/78

simonfelding avatar Jan 15 '24 14:01 simonfelding

Should be fixed by https://github.com/kubernetes-sigs/ingress2gateway/pull/128

Now, when reading local files, the k8s client creation is skipped.

dpasiukevich avatar Jan 31 '24 11:01 dpasiukevich

@simonfelding I believe this issue is fixed now. Can you confirm?

LiorLieberman avatar Feb 08 '24 11:02 LiorLieberman

Closing this as fixed. Feel free to reopen if needed.

LiorLieberman avatar Feb 18 '24 18:02 LiorLieberman

@LiorLieberman I still have this error with 0.2.0:

[screenshot of the error]

My only wish is to use the CLI offline, on our GitOps repository.

davinkevin avatar Mar 29 '24 08:03 davinkevin

Hi @davinkevin, I guess the resources in your ingress.yaml file do not specify a namespace (and if they do, it isn't the default one). Please use the -A (--all-namespaces) flag.
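
For example (an illustrative invocation; the file name matches the one used later in this thread):

ingress2gateway print --input_file ingress.yaml -A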

That should produce your expected result; please let us know if it doesn't.

levikobi avatar Apr 05 '24 07:04 levikobi

Thank you for your answer, @levikobi. The result is different but not really better.

[screenshot of the output]

My file is just this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: podinfo.ingress.k8s.local
spec:
  rules:
    - host: podinfo.ingress.k8s.local
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: podinfo
                port:
                  name: http

davinkevin avatar Apr 05 '24 07:04 davinkevin

@davinkevin are you sure you're using release 0.2? To be honest, I can't seem to reproduce the error you're showing here, nor on main.

levikobi avatar Apr 05 '24 07:04 levikobi

[screenshot of the installed version]

I would say yes 🤔

davinkevin avatar Apr 05 '24 09:04 davinkevin

Hey @davinkevin, the reason is that you did not specify ingressClassName in the spec, so no provider picks it up.

Also, I think there is a bug when using port.name. Can you try adding ingressClassName and changing port.name to port.number, and report back whether it works? The sketch below shows the two changes.
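
A sketch of the two changes against your original file (nginx and 1234 are only example values, not requirements):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: podinfo.ingress.k8s.local
spec:
  ingressClassName: nginx   # added: lets a provider (here ingress-nginx) pick the Ingress up
  rules:
    - host: podinfo.ingress.k8s.local
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: podinfo
                port:
                  number: 1234   # changed from "name: http" to a numeric port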

LiorLieberman avatar Apr 07 '24 20:04 LiorLieberman

Hello @LiorLieberman,

So, I first tried with a specific ingressClassName and the change you mentioned:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: podinfo.ingress.k8s.local
spec:
  ingressClassName: nginx
  rules:
    - host: podinfo.ingress.k8s.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: podinfo
                port:
                  number: 1234

And it's working! The result is:

ingress2gateway print --input_file=ingress.yaml --providers=ingress-nginx -A
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  creationTimestamp: null
  name: nginx
spec:
  gatewayClassName: nginx
  listeners:
  - hostname: podinfo.ingress.k8s.local
    name: podinfo-ingress-k8s-local-http
    port: 80
    protocol: HTTP
status: {}
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  creationTimestamp: null
  name: podinfo.ingress.k8s.local-podinfo-ingress-k8s-local
spec:
  hostnames:
  - podinfo.ingress.k8s.local
  parentRefs:
  - name: nginx
  rules:
  - backendRefs:
    - name: podinfo
      port: 1234
    matches:
    - path:
        type: PathPrefix
        value: /
status:
  parents: []

If I use the named port, I indeed have an error:

[screenshot of the error with the named port]

The main remark I'd make is that, for newcomers, it's a real shame not to be able to use the tool on bare Ingress files. Maybe a generic provider could handle Ingresses that no other provider picks up?

davinkevin avatar Apr 08 '24 06:04 davinkevin

I could be convinced the idea is good. Would you like to go ahead and contribute such a thing?

LiorLieberman avatar Apr 10 '24 05:04 LiorLieberman

Good. Unfortunately, I don't think I'll find the time to provide a quality PR for that support… #Sorry

However, I'll keep it in mind in case I have more time for it, but I'm sure someone will have done it before me 😅

davinkevin avatar Apr 16 '24 15:04 davinkevin

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 15 '24 16:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 14 '24 16:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Sep 13 '24 16:09 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the triage bot's /close not-planned comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Sep 13 '24 16:09 k8s-ci-robot