
Is it possible to access a different service with a port from the same hostname?

oliverpark999 opened this issue 9 months ago • 2 comments

Hi,

  • Gateway API NLB listener ports: 80, 800, 8888

  • Gateway API Gateway manifest

spec:
  gatewayClassName: istio
  listeners:
    - allowedRoutes:
        namespaces:
          from: All
      hostname: '*.foobar'
      name: http
      port: 80
      protocol: HTTP

    - allowedRoutes:
        namespaces:
          from: All
      hostname: '*.foobar'
      name: foobar-b
      port: 800
      protocol: HTTP

    - allowedRoutes:
        namespaces:
          from: All
      hostname: '*.foobar'
      name: foobar-c
      port: 8888
      protocol: HTTP
  • HTTPRoute manifest (abridged)

{...}
hostnames:
- test.frist.second.foobar
{...}
  • Service manifest

service:
  type: ClusterIP
  ports:
    - port: 80
      protocol: TCP
      name: foobar-a
      targetPort: 8000
    - port: 800
      protocol: TCP
      name: foobar-b
      targetPort: 800
    - port: 8888
      protocol: TCP
      name: foobar-c
      targetPort: 8888
  • Accessing test.frist.second.foobar/foobar => expect traffic to the port 80 service

  • Accessing test.frist.second.foobar:800/foobar => expect traffic to the port 800 service

  • Accessing test.frist.second.foobar:8888/foobar => expect traffic to the port 8888 service

However, even when I specify a listener port on the hostname 'test.frist.second.foobar', the request is always routed to the same port 80 service. Is it possible to reach a different service via the port on the same hostname?

oliverpark999 avatar Mar 13 '25 03:03 oliverpark999

That should be possible. Hard to say if you did something wrong or not since you didn't include the routes.
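For reference, the usual way to make routing differ per listener port is to bind each HTTPRoute to a specific listener with `spec.parentRefs[].sectionName` (matching the listener `name` from the Gateway above), rather than attaching to the Gateway as a whole — a route attached to the whole Gateway binds to every compatible listener, which would send all three ports to the same backend. A minimal sketch for the port 800 listener; the Gateway name `my-gateway`, route name, and Service name `foobar-svc` are assumptions, since the original routes were not posted:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: foobar-b-route        # hypothetical name
spec:
  parentRefs:
    - name: my-gateway        # assumed Gateway name
      sectionName: foobar-b   # bind to the port 800 listener only
  hostnames:
    - test.frist.second.foobar
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /foobar
      backendRefs:
        - name: foobar-svc    # assumed Service name
          port: 800           # Service port (forwards to targetPort 800)
```

An analogous route with `sectionName: foobar-c` and `port: 8888` would cover the third listener, while the existing route with `sectionName: http` keeps serving port 80.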

howardjohn avatar Mar 13 '25 15:03 howardjohn

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jun 11 '25 16:06 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jul 11 '25 16:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Aug 10 '25 17:08 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Aug 10 '25 17:08 k8s-ci-robot