
No way to configure multiple hostnames for a single Harbor instance

Open rbhuang opened this issue 4 years ago • 20 comments

Our scenario: we have deployed Harbor in our data center, and we have internal and external hostnames for the Harbor instance. We use CI to push images to Harbor's internal hostname, while other systems may fetch images from Harbor through either the internal or the external hostname. I found that Harbor only supports one hostname, so I would like to ask how to configure Harbor to support a two-hostname scenario. Thanks, rb

rbhuang avatar Jul 09 '19 02:07 rbhuang

I can't think of a good resolution, because before reading from or writing to the registry you need to fetch a token via this endpoint: https://github.com/goharbor/harbor/blob/master/make/photon/prepare/templates/registry/config.yml.jinja#L31

A very rough thought: maybe you can try modifying the registry's config.yml after installing Harbor, following https://docs.docker.com/registry/configuration/, to enable htpasswd and use htpasswd for internal access.
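
For illustration, here is a minimal sketch of what that registry-side change could look like, following the Docker distribution configuration reference (the realm and htpasswd path below are made-up example values, not anything Harbor generates):

auth:
  htpasswd:
    realm: basic-realm
    path: /etc/registry/htpasswd

The htpasswd file would need bcrypt entries, e.g. generated with htpasswd -Bbn <user> <password>. Treat this as a sketch only; re-running Harbor's prepare step may regenerate the config and overwrite manual edits.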

reasonerjt avatar Jul 09 '19 06:07 reasonerjt

@xaleeks I suggest marking it as won't fix.

reasonerjt avatar Aug 16 '19 07:08 reasonerjt

We have the same scenario; we scp images to the Harbor host first, then push to Harbor.

neo502721 avatar Oct 18 '19 06:10 neo502721

We also have a similar scenario: we need to expose Harbor both internally and externally. Internally, for obvious reasons, to push/pull images from clusters and over VPN; externally because we use other cloud providers, and the external endpoint will be restricted by IP. So we need two hostnames (one per LoadBalancer), but it seems we hit the issue described by @reasonerjt (when doing docker login with the external domain, it redirects to the internal one).

What would be the best way to achieve this? I thought about creating another core deployment and service, pointing to the existing one, and exposing that deployment with the external ingress. But I'm not sure whether it would create issues, like race conditions or other problems.

holinnn avatar Jun 18 '20 09:06 holinnn

Facing the same issue: We have an internal network (for the clusters to fetch the images) and an external network attached.

Using the Web UI from either network/domain works fine.

However, if the hostname is set to the domain leading to the internal network, then push-based replication from another, external Harbor instance fails:

2020-06-11T10:49:48Z [INFO] [/replication/transfer/image/transfer.go:125]: client for destination registry [type: harbor, URL: https://harbor.ext, insecure: true] created
 2020-06-11T10:49:48Z [INFO] [/replication/transfer/image/transfer.go:158]: copying ourproj/templateservice:[0.0.3](source registry) to destproj/templateservice:[0.0.3](destination registry)...
 2020-06-11T10:49:48Z [INFO] [/replication/transfer/image/transfer.go:179]: copying ourproj/templateservice:0.0.3(source registry) to destproj/templateservice:0.0.3(destination registry)...
 2020-06-11T10:49:48Z [INFO] [/replication/transfer/image/transfer.go:285]: pulling the manifest of artifact ourproj/templateservice:0.0.3 ...
 2020-06-11T10:49:48Z [INFO] [/replication/transfer/image/transfer.go:291]: the manifest of artifact ourproj/templateservice:0.0.3 pulled
 2020-06-11T10:49:48Z [*ERROR*] [/replication/transfer/image/transfer.go:299]: failed to check the existence of the manifest of artifact destproj/templateservice:0.0.3 on the destination registry: Get https://harbor.int/service/token?scope=repository%3Adestproj%2Ftemplateservice%3Apull&service=harbor-registry: dial tcp: lookup harbor.int on 10.10.10.10:53: no such host

Although it initially targets the harbor.ext domain, it then switches to contacting harbor.int, which of course does not work from an external network.

Likewise, setting the hostname to the external domain name leads to a working replication. However, the cluster is then no longer able to pull the images.

For now we will opt to manually pull the images from the external registry and manually push them to our registry.

Hopefully, there will be an option to use the registry over various networks with differing domain names in the future (or another solution).

Timoses avatar Jul 01 '20 10:07 Timoses

Note regarding Kubernetes: when the Harbor instance is configured with the domain name harbor.ext and Kubernetes is configured with an image from harbor.int, the Kubernetes node fails to pull the image (tested on a cluster running Docker as the container runtime).

So it looks as though pulling from an instance through a domain (or network?) different from the one configured in the Harbor instance does not work (I did not conduct a specific test, just witnessed this occurring when I configured another domain in the Harbor instance).

Timoses avatar Jul 28 '20 21:07 Timoses

+1, same scenario here. I deployed a Harbor instance on my LAN server and exposed it to the Internet via Aliyun, which bills by network traffic, so I want to push images to Harbor via an intranet/LAN domain and pull images via an Internet domain (from outside our LAN).

emeryao avatar May 10 '21 08:05 emeryao

+1

cobolbaby avatar May 24 '21 09:05 cobolbaby

+1

fly0512 avatar Jun 09 '21 08:06 fly0512

+1

qcu266 avatar Jun 12 '21 01:06 qcu266

Same use-case here: Different hostnames for internal and external access.

Why does Harbor insist on using the configured hostname? Can't Harbor just use the hostname of the current HTTP request when constructing the redirect to the token endpoint?
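
For context, the behaviour in question shows up in the challenge the registry returns to an unauthenticated request; a rough illustration using the example hostnames from earlier comments (exact output varies by version):

curl -sI https://harbor.ext/v2/
# HTTP/1.1 401 Unauthorized
# Www-Authenticate: Bearer realm="https://harbor.int/service/token",service="harbor-registry"

The realm in the Www-Authenticate header points at the configured hostname rather than the hostname the request arrived on, which is why clients end up redirected to the "wrong" domain.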

ChristianCiach avatar Jul 21 '21 14:07 ChristianCiach

+1

marvindaviddiaz avatar Aug 20 '21 16:08 marvindaviddiaz

The domain is set on the core component in app.conf. You could use one core component per domain and expose services for all of the core components.

shenshouer avatar Oct 13 '21 12:10 shenshouer

+1

withlin avatar Mar 30 '22 08:03 withlin

+1

jhanos avatar Apr 14 '22 08:04 jhanos

same issue:

I use this to install it:

helm install -n hub --create-namespace --set 'expose.type=nodePort,expose.tls.enabled=false,expose.nodePort.ports.http.nodePort=30002,expose.tls.commonName=.*,externalURL=http://.*,harborAdminPassword=adminadmin,secretKey=not-a-secure-key' -- hub-harbor harbor/harbor

but when I run docker login harbor.hub.svc.cluster.local, I get:

Username: admin
Password:
Error response from daemon: Get http://harbor.hub.svc.cluster.local/v2/: Get http://.*/service/token?account=admin&client_id=docker&offline_token=true&service=harbor-registry: dial tcp: lookup .*: no such host

If I change the .* to _ when installing:

Username: admin
Password:
Error response from daemon: Get http://harbor.hub.svc.cluster.local/v2/: Get http://_/service/token?account=admin&client_id=docker&offline_token=true&service=harbor-registry: dial tcp: lookup _ on 10.96.0.10:53: no such host
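
The lookup failures above happen because the token realm is built from externalURL, which was set to http://.* (and then http://_). Purely as a hedged, untested sketch reusing the values from the command above, pointing externalURL at the hostname that docker login actually uses would look something like:

helm install -n hub --create-namespace hub-harbor harbor/harbor \
  --set expose.type=nodePort \
  --set expose.tls.enabled=false \
  --set expose.nodePort.ports.http.nodePort=30002 \
  --set externalURL=http://harbor.hub.svc.cluster.local \
  --set harborAdminPassword=adminadmin \
  --set secretKey=not-a-secure-key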

yhm-amber avatar Jun 15 '22 10:06 yhm-amber

Any update on this? A multi-domain option would also make it much easier to minimize downtime. Let's take an example: we have Harbor running at harbor.domain.org. Now we want to redeploy (for whatever reason) a new instance pointed to by harbor.domain.net. Virtually zero downtime can be achieved by having a CNAME record for a third domain, harbor.domain.com, which points to one of the other domains. Such a scenario can be useful when you have to create a new k8s cluster without the possibility of migrating your current state, as sketched below.
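
A rough sketch of the DNS records this describes (names and TTLs purely illustrative):

; stable name that clients use
harbor.domain.com.  300  IN  CNAME  harbor.domain.org.
; after the new cluster is ready, repoint it:
; harbor.domain.com.  300  IN  CNAME  harbor.domain.net.

The catch is that Harbor itself still accepts only one configured hostname, so the token realm keeps pointing at whatever is set as externalURL, regardless of which name the client used.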

samox73 avatar Jul 05 '22 15:07 samox73

We have achieved access via multiple hostnames to a K8s-hosted, Helm-deployed Harbor instance, using the ingress expose option. We were able to do so by deploying a secondary K8s Ingress based on the one generated by the Helm chart. We have a pair of nginx load balancers in front of our K8s cluster routing traffic to the Traefik entrypoints on the cluster.

With this configuration we can reach the web UI and run docker push and pull commands using either hostname.

Our environment setup is as follows:

Harbor Helm: v1.9.1 (Harbor v2.5.1)
Kubernetes: v1.24
Ingress Provider: Traefik v2.7.1
Nginx: v1.21.5

An extract of our Harbor values.yaml file covering the pertinent detail:

externalURL: "harbor.cluster_name.service.domain"

expose: 
  type: ingress
  tls:
    certSource: secret
    secret:
      secretName: "harbor.cluster_name.service.domain"
  ingress:
    hosts:
      core: "harbor.cluster_name.service.domain"
    controller: default
    annotations:
      kubernetes.io/ingress.class: traefik
      traefik.ingress.kubernetes.io/router.entrypoints: websecure

registry:
  relativeurls: true

Our secondary ingress definition:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/proxy-body-size: "0"
    ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/ingress.class: traefik
    meta.helm.sh/release-name: harbor
    meta.helm.sh/release-namespace: harbor-namespace
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
  labels:
    app: harbor
    release: harbor
  name: harbor-ingress-secondary
  namespace: harbor-namespace
spec:
  rules:
  - host: harbor.domain
    http:
      paths:
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /api/
        pathType: Prefix
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /service/
        pathType: Prefix
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /v2
        pathType: Prefix
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /chartrepo/
        pathType: Prefix
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /c/
        pathType: Prefix
      - backend:
          service:
            name: harbor-portal
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - harbor.domain
    secretName: harbor.cluster_name.service.domain

The NGINX server block to route traffic to our Traefik entrypoint. It is worth noting that the SSL certificates configured on the NGINX servers contain both domains (harbor.cluster_name.service.domain and harbor.domain) in their Subject Alternative Name definitions:

server {
  listen <loadbalancer_ip_address>:443 ssl http2;
  listen <loadbalancer_ip_address>:443 ssl http2;
  listen 443 ssl http2;
  status_zone kubernetes_cluster_name_https;
  server_name harbor.cluster_name.service.domain harbor.domain;
  keepalive_timeout 100;
  include /etc/nginx/ssl.conf;
  ssl_certificate /usr/share/nginx/ssl/harbor/cluster_name/server.crt;
  ssl_certificate_key /usr/share/nginx/ssl/harbor/cluster_name/server.key;
  server_tokens off;
  client_max_body_size 0;

  location / {
    proxy_pass "http://kubernetes_cluster_name_https";
    include /etc/nginx/proxy.conf;
    proxy_read_timeout  90s;
    proxy_set_header X-Forwarded-Proto https;
  }
}
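
As a side note, a quick way to confirm a certificate carries both names in its Subject Alternative Name extension (using the certificate path from the server block above):

openssl x509 -in /usr/share/nginx/ssl/harbor/cluster_name/server.crt -noout -text | grep -A1 'Subject Alternative Name'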

Robbie558 avatar Jul 07 '22 14:07 Robbie558

Here is my workaround. First, add a map directive to the nginx-configuration ConfigMap of the nginx ingress controller:

data:
  http-snippet: |
    map $upstream_http_www_authenticate $modified{
      default '';
      "~^(Bearer realm=\"https://)({your internal host name})(.*)" "$1$host$3";
    }

The map directive can only be added to the http context, and the nginx-configuration ConfigMap is the only place I found where the http context configuration can be edited.

Then, in the Harbor ingress manifest, we add the header-overwrite logic:

nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_hide_header www-authenticate;
      add_header www-authenticate $modified always;
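
A quick way to sanity-check the rewrite is to inspect the challenge header returned through the ingress; a rough, hedged example (same hostname placeholder as above, output abridged):

curl -skI https://{your external host name}/v2/ | grep -i www-authenticate
# expected: Www-Authenticate: Bearer realm="https://{your external host name}/service/token",service="harbor-registry"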

DougTea avatar Aug 08 '22 07:08 DougTea

This did work for me! Thank you very much for sharing @DougTea!

antoffka avatar Aug 11 '22 02:08 antoffka

@qnetter Can you have a look at this?

AllForNothing avatar Dec 12 '22 08:12 AllForNothing

+1. I have a similar scenario running Harbor multi-region with DNS failover. I need one global hostname for the DNS failover and a second regional hostname to perform the AWS Route53 health checks per region. Unfortunately it seems only one hostname is supported.

DimArmen avatar Jul 26 '23 19:07 DimArmen

+1

nueavv avatar Aug 10 '23 04:08 nueavv

+1...

hh831 avatar Aug 10 '23 04:08 hh831

+1

mddamato avatar Sep 20 '23 18:09 mddamato

+1

silverm0on avatar Sep 30 '23 17:09 silverm0on

+1

shelmingsong avatar Oct 13 '23 09:10 shelmingsong

Why does @DougTea's workaround not work for me? I get an invalid username/password error:

$ podman login harbor.xxx.com
Authenticating with existing credentials for harbor.xxx.com
Existing credentials are invalid, please enter valid username and password
Username (harborAdmin):
Password:
Error: logging into "harbor.xxx.com": invalid username/password

lu-you avatar Mar 05 '24 03:03 lu-you

@lu-you check your nginx.conf in your ingress controller

DougTea avatar Mar 05 '24 03:03 DougTea

@lu-you check your nginx.conf in your ingress controller

kubectl get cm -n ingress-nginx ingress-nginx-controller -o yaml

apiVersion: v1
data:
  allow-snippet-annotations: "true"
  http-snippet: |
    map $upstream_http_www_authenticate $modified{
      default '';
      "~^(Bearer realm=\"https://)({inter domain})(.*)" "$1$host$3";
    }

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
    meta.helm.sh/release-name: artifactory
    meta.helm.sh/release-namespace: skiff-artifactory
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_hide_header www-authenticate;
      add_header www-authenticate $modified always;
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-stream-timeout: "300"
    nginx.ingress.kubernetes.io/upstream-keepalive-timeout: "300"

please help me

lu-you avatar Mar 05 '24 03:03 lu-you