
Ambiguity in Service Routing and TLS Termination When Deploying Harbor Helm Chart Behind External TLS Termination

AlverezYari opened this issue 11 months ago • 4 comments

Preface:

Note: It’s possible that deploying Harbor with TLS termination entirely handled at the load balancer (or Gateway API) level is not fully supported. However, the documentation implies that this should work, leading to significant ambiguity. I also attempted using a traditional Ingress object to offload TLS termination, but it did not behave as expected.

Description:

I'm experiencing issues when deploying Harbor using the official Helm chart (version v1.16.2) in a Kubernetes cluster, specifically when offloading TLS termination to an external mechanism (via Gateway API or traditional Ingress). While my Gateway and other applications (e.g., ArgoCD and Grafana) work correctly under this model, Harbor does not respond as expected.

Environment:

Kubernetes Version: 1.20+

Helm Version: v3.2.0+

Harbor Chart Version: v1.16.2

Gateway API Controller: Cilium Gateway integration (Gateway configured with HTTPS listeners) + Cilium Ingress

DNS: harbor.private.domain resolves correctly to the Gateway external IP

Configuration Details:

In my Harbor Helm values, I set:

expose:
  type: clusterIP
  tls:
    enabled: false
externalURL: https://harbor.private.domain

This should deploy Harbor with a plain HTTP (ClusterIP) service, expecting an external TLS terminator to handle HTTPS.

My Gateway API configuration includes an HTTPS listener (for harbor.private.domain) and an HTTPRoute with a backend reference targeting the Harbor service. However, Harbor is composed of multiple services (core and portal), and the documentation doesn't clearly state which service should be targeted for external traffic (UI login vs. API calls).

For example, my current HTTPRoute (intended for UI access) is:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: harbor-route
  namespace: gateways
spec:
  parentRefs:
    - name: tls-gateway   # Gateway with HTTPS listeners
      sectionName: https-3  # Listener dedicated to harbor.private.domain
      namespace: gateways
  hostnames:
    - harbor.private.domain
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: harbor-portal  # Targeting the portal service for UI
          namespace: harbor
          port: 80

While this configuration seems logical (routing / to the portal for UI access), it’s unclear from the chart documentation or defaults whether this is the intended setup. The chart appears to create multiple services on port 80 (e.g., core and portal), and the decision on which one to route to is ambiguous.

Issues Encountered:

Ambiguity in Target Service: The chart deploys multiple services on port 80 (Harbor core and Harbor portal). It is not clear from the documentation which service should be targeted for external UI access when Harbor is deployed behind an external TLS terminator.

TLS Offload Inconsistency: Both when using the Gateway API and when attempting a traditional Ingress object for TLS termination, Harbor does not seem to offload TLS termination as expected. Despite proper DNS resolution and Gateway configuration, external HTTPS requests are not handled correctly.

Documentation Gaps: Although the Harbor Helm chart documentation provides guidelines for different exposure types (ingress, clusterIP, nodePort, loadBalancer), it doesn’t clarify the proper routing when using an external TLS terminator. Additionally, default ingress annotations and internal TLS settings may cause conflicts if not fully disabled.

Expected Behavior:

I would expect:

Clear documentation on which Harbor service (core vs. portal) should be targeted for external UI/API access when deploying Harbor behind an external TLS terminator.

An option (or a set of documented configuration overrides) that fully disables Harbor’s internal TLS/ingress configuration, making it straightforward to deploy Harbor behind an external TLS terminator.

Consistent external behavior where accessing https://harbor.private.domain routes traffic correctly to the Harbor UI and API.
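
For reference, a values override that should fully disable the chart's internal TLS handling might look something like the sketch below. The expose.type, expose.tls.enabled, internalTLS.enabled, and externalURL keys all exist in the chart's values.yaml; whether this exact combination is the supported path for external TLS termination is precisely what this issue is asking about, so treat it as an assumption rather than a confirmed recipe:

```yaml
# Hedged sketch: deploy Harbor behind an external TLS terminator.
# All keys are real chart values; whether this combination is an
# officially supported configuration is what this issue asks.
expose:
  type: clusterIP
  tls:
    enabled: false        # no TLS on the exposed service
internalTLS:
  enabled: false          # no TLS between Harbor components
externalURL: https://harbor.private.domain
```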

Steps to Reproduce:

Deploy Harbor using the Helm chart with the above configuration.

Configure an external TLS terminator via Gateway API (or traditional Ingress) with an HTTPS listener for harbor.private.domain.

Create an HTTPRoute (or Ingress) targeting the Harbor service (e.g., harbor-portal on port 80).

Attempt to access Harbor via HTTPS.

Additional Context:

Other applications (ArgoCD, Grafana) deployed using similar external TLS termination setups work as expected. DNS resolves correctly, and the Gateway logs indicate that the listener for Harbor is programmed. The issue appears isolated to Harbor’s internal configuration and how the chart deploys multiple services on the same port without clear guidance on routing external traffic.

Conclusion:

Could you please clarify:

Is offloading TLS termination entirely to an external proxy actually supported by this chart?

Which service should be targeted for external UI/API access when deploying Harbor behind an external TLS terminator?

Are there additional recommended overrides for using the Gateway API (or traditional Ingress) with Harbor (or known issues in v1.16.2) that I should be aware of?

Any insights or documentation updates would be greatly appreciated. Thanks!

AlverezYari avatar Apr 05 '25 14:04 AlverezYari

Hi @AlverezYari ,

There should be an Nginx pod/service that handles all the backend routing for you on the Harbor side. You can verify it like this, e.g. with externalURL: http://harbor.<namespace>.svc.cluster.local.

$ kubectl run -it test-pod --image=photon:5.0 --restart=Never --rm
If you don't see a command prompt, try pressing enter.
root [ / ]# curl http://harbor.default.svc.cluster.local/api/v2.0/systeminfo
{"auth_mode":"db_auth","banner_message":"","harbor_version":"v2.12.1-80219b7d","oidc_provider_name":"","primary_auth_mode":false,"self_registration":false}

So I guess that in your actual scenario, you could configure your own externalURL as needed and then set the route rule at your API gateway to point directly at the Nginx service.
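
Following this suggestion, a single-rule HTTPRoute pointing at the chart's Nginx proxy service might look like the sketch below. The backend name harbor is an assumption based on a release named harbor (the chart names the proxy Service after the release); the proxy then fans requests out to core and portal internally:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: harbor-route
  namespace: gateways
spec:
  parentRefs:
    - name: tls-gateway     # Gateway with HTTPS listeners
      sectionName: https-3  # Listener dedicated to harbor.private.domain
      namespace: gateways
  hostnames:
    - harbor.private.domain
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        # Assumption: "harbor" is the Nginx proxy Service created by a
        # release named "harbor"; it routes to core/portal internally.
        - name: harbor
          namespace: harbor
          port: 80
```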

MinerYang avatar Apr 07 '25 07:04 MinerYang

So I would deliver my traffic to the nginx-pod on 80, having terminated my TLS via Cilium, and it should be good to go?

AlverezYari avatar Apr 07 '25 17:04 AlverezYari

I've also set up Harbor using an HTTPRoute. The issue I found is that there is no way to disable the Nginx service when using clusterIP, which is a pain.

I used the following for my HTTPRoute, which I derived from the chart's ingress rules:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: harbor
  namespace: harbor
spec:
  parentRefs:
    - name: public
      namespace: infra
  hostnames:
    - registry.mydomain
  rules:
    - matches:
        - path:
            value: /v2/
        - path:
            value: /service/
        - path:
            value: /c/
        - path:
            value: /api/
      backendRefs:
        - name: harbor-core
          port: 80
    - matches:
        - path:
            value: /
      backendRefs:
        - name: harbor-portal
          port: 80

sorrison avatar Apr 28 '25 11:04 sorrison

I have a working setup similar to @sorrison's, using Gateway API to provide external access, but the original idea of having Nginx handle all the backend routing doesn't seem ideal (at least when you're not allowed to disable it).

If expose.type is not ingress, we might end up with one of these two scenarios:

  1. Proxy over proxy: run a proxy that forwards requests to another proxy before they reach the backend services.
  2. Unnecessary resources created: OK, we could set nginx.replicas: 0 to avoid running the pods, but all the other resources created (including the Deployment) would still sit there doing nothing.
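
For completeness, the scale-to-zero workaround mentioned in point 2 would be a values override along these lines (a sketch using the chart's real nginx.replicas key; as noted, it stops the pods but leaves the Deployment and Service objects in place):

```yaml
# Workaround sketch: stop the proxy pods without removing the objects.
# nginx.replicas follows the chart's per-component replica convention;
# the Nginx Deployment and Service are still created, just idle.
nginx:
  replicas: 0
```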

laurodias avatar May 28 '25 15:05 laurodias

This issue is being marked stale due to a period of inactivity. If this issue is still relevant, please comment or remove the stale label. Otherwise, this issue will close in 30 days.

github-actions[bot] avatar Jul 28 '25 09:07 github-actions[bot]


This issue was closed because it has been stalled for 30 days with no activity. If this issue is still relevant, please re-open a new issue.

github-actions[bot] avatar Nov 10 '25 09:11 github-actions[bot]