harbor-helm
Harbor pushed images not showing in the ui
Hi all,
First of all, I am aware this is an old, already-discussed issue. I have followed the advice left in the comments of the issues below, but I am still not able to display Docker images in my Harbor portal when using the Istio ingress controller:
#1295, #539, https://github.com/goharbor/harbor/issues/11906, https://github.com/goharbor/harbor/issues/12804
I am installing the latest version of Harbor with the provided Helm chart using the command below. The service is deployed on AWS EKS behind an AWS NLB and an Istio Gateway.
Chart Install
helm install --wait harbor --namespace ops harbor/harbor \
--set expose.type=clusterIP \
--set expose.tls.enabled=true \
--set externalURL=https://harbor.hostname.co.uk \
--set internalTLS.enabled=false \
--set persistence.enabled=false \
--set harborAdminPassword=bitnami \
--set registry.relativeurls=false \
--set registry.credentials.username=test \
--set registry.credentials.password=test \
--set redis.type=internal \
--set expose.tls.secretName=harbor-tls-secret \
--set expose.tls.auto.commonName=harbor.hostname.co.uk
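For completeness, the harbor/harbor chart reference above assumes the official chart repository has already been added, e.g.:
helm repo add harbor https://helm.goharbor.io
helm repo update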
The Istio configuration is as follows. The ingress gateway is installed in the istio-system namespace together with a public Gateway, which intercepts any request to the cluster and forwards it to the other VirtualServices.
Gateway
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  annotations:
    istio-type: internal
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
      tls:
        httpsRedirect: true
    - port:
        number: 443
        name: https
        protocol: HTTPS
      hosts:
        - "*"
      tls:
        mode: SIMPLE
        credentialName: gateway-tls-secret
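As a quick sanity check (the istio: gateway selector above is non-default; stock installs usually label the ingress pods istio: ingressgateway), you can confirm the Gateway actually binds to a running pod and that the mesh config passes analysis:
kubectl -n istio-system get pods -l istio=gateway
istioctl analyze -n istio-system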
Virtual Service
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: harbor
spec:
  gateways:
    - istio-system/public-gateway
    - mesh
  hosts:
    - harbor.hostname.co.uk
  http:
    - match:
        - uri:
            prefix: '/c/'
      route:
        - destination:
            host: harbor-core
    - match:
        - uri:
            prefix: '/api/'
      route:
        - destination:
            host: harbor-core
    - match:
        - uri:
            prefix: '/v2/'
      route:
        - destination:
            host: harbor-registry
            port:
              number: 5000
    - match:
        - uri:
            prefix: '/v1/'
      route:
        - destination:
            host: harbor-registry
            port:
              number: 5000
      fault:
        abort:
          httpStatus: 404
    - match:
        - uri:
            prefix: '/service/'
      route:
        - destination:
            host: harbor-core
    - match:
        - uri:
            prefix: '/chartrepo/'
      route:
        - destination:
            host: harbor-core
    - route:
        - destination:
            host: harbor-portal
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: harbor-core
spec:
  hosts:
    - harbor-core
    - harbor-core.harbor.svc.cluster.local
  http:
    - match:
        - uri:
            prefix: '/v2/'
      route:
        - destination:
            host: harbor-registry
            port:
              number: 5000
    - route:
        - destination:
            host: harbor-core
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: harbor-registry
spec:
  hosts:
    - harbor.hostname.co.uk
  ports:
    - number: 5000
      name: http-registry
  location: MESH_INTERNAL
  resolution: NONE
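One thing worth double-checking in this routing (a sketch of an alternative, not a confirmed fix): in Harbor 2.x the /v2/ API is normally fronted by harbor-core, the component that records pushed artifacts in the database the portal reads, and the chart's bundled nginx proxies /v2/ to core rather than straight to the registry service. A minimal variant of the '/v2/' rule above, assuming harbor-core listens on its default service port 80 when internalTLS is disabled:
# Hypothetical variant: send /v2/ through harbor-core, which proxies to
# the registry and records the artifact metadata the portal displays.
- match:
    - uri:
        prefix: '/v2/'
  route:
    - destination:
        host: harbor-core
        port:
          number: 80
Routing /v2/ directly to harbor-registry:5000 would let blobs land in registry storage without Harbor's database ever learning about them, which would match images that pull fine but never appear in the UI.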
I can log in to the UI without any issue. However, when I push something to the Docker registry, the new repositories do not show up in the UI, even though I am able to pull the images back down.
Other things to note:
- I cannot docker push using the admin user credentials
- I log in to the Docker registry using the following:
docker login --username test --password test https://harbor.hostname.co.uk
- It seems I cannot upload charts to ChartMuseum either
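One way to see whether Harbor's database (as opposed to the registry's blob storage) knows about a pushed image is the repositories API; a hedged example, reusing the admin password from the install command above and a hypothetical project name myproject:
curl -u admin:bitnami https://harbor.hostname.co.uk/api/v2.0/projects/myproject/repositories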
Hi @githanium,
This should not happen on the latest Harbor version if you can push to Harbor successfully.
Are you saying you cannot log in to the Harbor registry with the admin account, but can with the test user?
Could you please provide the harbor-core log? Thanks.
Hi @MinerYang
I have updated my helm command as below:
helm install harbor --namespace ops bitnami/harbor \
--set service.type=ClusterIP \
--set persistence.enabled=false \
--set notary.enabled=false \
--set externalURL=harbor.domain.co.uk \
--set chartmuseum.enabled=false \
--set redis.master.persistence.existingClaim="harbor-redis" \
--set redis.master.persistence.storageClass="harbor-redis" \
--set redis.master.persistence.size="8Gi" \
--set postgresql.primary.persistence.existingClaim="harbor-database" \
--set postgresql.primary.persistence.storageClass="harbor-database" \
--set postgresql.primary.persistence.size="50Gi" \
--set nginx.tls.existingSecret=harbor-tls-secret \
--set nginx.tls.commonName=harbor.domain.co.uk \
--set registry.relativeurls=true \
--set registry.tls.existingSecret=harbor-tls-secret \
--set registry.server.image.debug=true
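Note that this switches from the official harbor/harbor chart to Bitnami's bitnami/harbor chart, whose value names differ (e.g. service.type here versus expose.type earlier). If in doubt, the supported values for the chart actually in use can be dumped with:
helm show values bitnami/harbor > bitnami-harbor-values.yaml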
Please note I am using persistent volumes, so some data from previous attempts might be preserved there; I am aware this might cause issues.
I am still using the Istio VirtualService and Gateway shown in the post above. The Gateway is secured with a certificate generated by cert-manager and Let's Encrypt.
In addition, I have integrated Harbor with an OIDC provider (Keycloak) following this approach.
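For reference, the equivalent OIDC settings can also be applied through Harbor's configurations API; a minimal sketch with placeholder values (the key names follow Harbor's configuration schema; the Keycloak realm URL is hypothetical):
curl -u admin:<admin-password> -X PUT \
  -H 'Content-Type: application/json' \
  https://harbor.domain.co.uk/api/v2.0/configurations \
  -d '{
    "auth_mode": "oidc_auth",
    "oidc_name": "keycloak",
    "oidc_endpoint": "https://keycloak.example.com/realms/myrealm",
    "oidc_client_id": "harbor",
    "oidc_client_secret": "<client-secret>",
    "oidc_scope": "openid,profile,email",
    "oidc_verify_cert": true
  }'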
I will list below what is working and what is not:
Working
- I am able to log in to the Harbor portal using admin and the generated password stored in the secret
- I am able to log in to the portal using OIDC
- I am able to create new repositories in the portal and configure Harbor settings
- All pods are healthy
- I am able to log in with the Docker CLI using the following:
docker login harbor.domain.co.uk
Username: harbor_registry_user
Password: harbor_registry_password
- I am able to push to the registry using the following:
docker push harbor.domain.co.uk/repo/test:1
- I am able to pull the test:1 Docker image from another machine using the harbor_registry_user credentials
Not Working
- I am not able to see the uploaded test image in the repo project. Below is how it looks
- I am not able to log in to Docker using the super-admin user (admin) which I use to log in to the portal
- I am not able to log in to Docker using the OIDC username and CLI secret provided by the identity provider
Logs
harbor-core: This is happening all the time
2022-09-22T14:16:25Z [DEBUG] [/server/middleware/artifactinfo/artifact_info.go:54]: In artifact info middleware, url: /api/v2.0/ping
2022-09-22T14:16:25Z [DEBUG] [/server/middleware/security/unauthorized.go:28][requestID="fce38a2d-17fd-43ef-a78e-2970fc485333"]: an unauthorized security context generated for request GET /api/v2.0/ping
2022-09-22T14:16:25Z [DEBUG] [/server/middleware/log/log.go:30]: attach request id 4a4d503f-1797-4631-b4d4-10d25b0dce66 to the logger for the request GET /api/v2.0/ping
2022-09-22T14:16:25Z [DEBUG] [/server/middleware/artifactinfo/artifact_info.go:54]: In artifact info middleware, url: /api/v2.0/ping
2022-09-22T14:16:25Z [DEBUG] [/server/middleware/security/unauthorized.go:28][requestID="4a4d503f-1797-4631-b4d4-10d25b0dce66"]: an unauthorized security context generated for request GET /api/v2.0/ping
2022-09-22T14:16:26Z [DEBUG] [/pkg/config/manager.go:140]: failed to get key oidc_groups_claim, error: the configure value is not set, maybe default value not defined before get
2022-09-22T14:16:26Z [DEBUG] [/pkg/config/manager.go:140]: failed to get key oidc_admin_group, error: the configure value is not set, maybe default value not defined before get
2022-09-22T14:16:29Z [DEBUG] [/pkg/config/manager.go:140]: failed to get key oidc_groups_claim, error: the configure value is not set, maybe default value not defined before get
2022-09-22T14:16:29Z [DEBUG] [/pkg/config/manager.go:140]: failed to get key oidc_admin_group, error: the configure value is not set, maybe default value not defined before get
2022-09-22T14:16:32Z [DEBUG] [/pkg/config/manager.go:140]: failed to get key oidc_groups_claim, error: the configure value is not set, maybe default value not defined before get
2022-09-22T14:16:32Z [DEBUG] [/pkg/config/manager.go:140]: failed to get key oidc_admin_group, error: the configure value is not set, maybe default value not defined before get
2022-09-22T14:16:35Z [DEBUG] [/pkg/config/manager.go:140]: failed to get key oidc_groups_claim, error: the configure value is not set, maybe default value not defined before get
2022-09-22T14:16:35Z [DEBUG] [/pkg/config/manager.go:140]: failed to get key oidc_admin_group, error: the configure value is not set, maybe default value not defined before get
Strangely, harbor-core does not seem to produce any logs when a Docker image is pushed.
harbor-registry, on the other hand, produces a lot of logs. Nothing unusual; the only error I can see is below:
msg="response completed with error"
auth.user.name="harbor_registry_user"
err.code="blob unknown"
err.detail=sha256:2cdf3f451aae942a9c1d655346f7e99XXXXXXXXXXXXXXXXXXXXXXXX
err.message="blob unknown to registry"
go.version=go1.19
http.request.host=harbor.domain.co.uk
http.request.id=f64b4b77-2aa0-4ee6-b57d-02c05855e732
http.request.method=HEAD
http.request.remoteaddr=X.X.X.X
http.request.uri="/v2/repo/test/blobs/sha256:2cdf3f451aae942a9c1d655346f7e99XXXXXXXXXXXXXXXXXXXXXXXX"
http.request.useragent="docker/20.10.17 go/go1.17.11 git-commit/a89b842 kernel/5.10.104-linuxkit os/linux arch/arm64 UpstreamClient(Docker-Client/20.10.17 \(darwin\))"
http.response.contenttype="application/json; charset=utf-8"
http.response.duration=73.171181ms
http.response.status=404
http.response.written=157
vars.digest="sha256:2cdf3f451aae942a9c1d655346f7e99XXXXXXXXXXXXXXXXXXXXXXXX"
vars.name="repo/test"
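As an aside, a 404 blob unknown on a HEAD request is usually not an error in itself: before uploading each layer, the Docker client probes HEAD /v2/<name>/blobs/<digest> to ask whether the blob already exists, and a 404 simply means the layer gets uploaded. The probe can be reproduced by hand (placeholder digest; depending on how auth is wired, the registry may first answer with a token challenge):
curl -I -u harbor_registry_user:<password> \
  https://harbor.domain.co.uk/v2/repo/test/blobs/sha256:<digest>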
We have the same problem, but with Helm charts.
We pushed a new version of a Helm chart to the first Harbor instance and it replicated to the second one, but on the first it is not available in the UI or via helm search; it is only present in the S3 bucket.
Harbor version: 2.5.4
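If the chart tarball is present in the S3 backend but invisible to helm search, it may be worth checking what Harbor's ChartMuseum API itself reports for the project; a hedged sketch with placeholder names (this endpoint applies to Harbor releases that still bundle ChartMuseum, such as 2.5.x):
curl -u admin:<password> https://<harbor-host>/api/chartrepo/<project>/charts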
Any help on this one? We are having the same problem with harbor-helm chart version 1.10.1.
This issue is being marked stale due to a period of inactivity. If this issue is still relevant, please comment or remove the stale label. Otherwise, this issue will close in 30 days.
This issue was closed because it has been stalled for 30 days with no activity. If this issue is still relevant, please re-open a new issue.