kaniko
unable to push to Harbor
Actual behavior: I am using Kaniko in a Tekton pipeline to push images to my Harbor registry, but I can't push without being logged in.
Expected behavior: I would like to know if there is a way to provide the registry username and password to Kaniko in order to do a "docker login my.registry.harbor.net" before trying to push. Thank you!
Triage Notes for the Maintainers

| Description | Yes/No |
|---|---|
| Please check if this is a new feature you are proposing | |
| Please check if the build works in docker but not in kaniko | |
| Please check if this error is seen when you use --cache flag | |
| Please check if your dockerfile is a multistage dockerfile | |
Harbor registry is based on Docker distribution. You can authenticate to Harbor registry the same way you authenticate to any registry that uses Docker distribution afaik (e.g., DockerHub). https://github.com/GoogleContainerTools/kaniko#pushing-to-different-registries
You can configure the path to the directory containing the config.json using the DOCKER_CONFIG environment variable. You should create a secret containing .data.config.json and mount it to the path configured in DOCKER_CONFIG.
Example manifest snippet for container:

```yaml
container:
  image: gcr.io/kaniko-project/executor:v1.9.1
  env:
    - name: DOCKER_CONFIG
      value: /.docker
  command:
    - /kaniko/executor
  args:
    - --context=...
    - --dockerfile=...
    - --destination=...
  volumeMounts:
    - name: config-json
      mountPath: /.docker
      readOnly: true
volumes:
  - name: config-json
    secret:
      secretName: config-json
      defaultMode: 0400
```
Example secret:

```yaml
apiVersion: v1
data:
  config.json: e2F1dGhzOntoYXJib3IuZXhhbXBsZS5jb206e3VzZXJuYW1lOnJvYm90JHJlcG9wdXNoLHBhc3N3b3JkOnBVb2tDcW94ekFMNzZab05YZ2tTM1E1VHU0OTk5UWZDLGF1dGg6Y205aWIzUWtjbVZ3YjNCMWMyZzZjRlZ2YTBOeGIzaDZRVXczTmxwdlRsaG5hMU16VVRWVWRUUTVPVGxSWmtNPX19fQ==
kind: Secret
metadata:
  name: config-json
type: Opaque
```
Example base64-decoded config.json:

```json
{"auths":{"harbor.example.com":{"username":"robot$repopush","password":"pUokCqoxzAL76ZoNXgkS3Q5Tu4999QfC","auth":"cm9ib3QkcmVwb3B1c2g6cFVva0Nxb3h6QUw3NlpvTlhna1MzUTVUdTQ5OTlRZkM="}}}
```
@joseph-arturia were you able to push to Harbor using the information in the comment above?
I tried what @jameshearttech described, but I still could not push the image to Harbor.
@Queetinliu are you getting an error?
Yes, I have a Harbor instance and I get the error 'error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "10.1.12.12:30002/dao-2048/dao-2048:latest": POST http://10.1.12.12:30002/v2/dao-2048/dao-2048/blobs/uploads/: UNAUTHORIZED: unauthorized to access repository: dao-2048/dao-2048, action: push: unauthorized to access repository: dao-2048/dao-2048, action: push'. I do not know how to fix it. I have tried many methods but still get the error.
@Queetinliu are you authenticated to Harbor? What account are you authenticated with? Does the account have push permissions for that project?
@jameshearttech Yes, I can use the account to log in to the registry and push images to it with the docker and nerdctl commands, but I always get this error with kaniko. I have to say it's difficult to use kaniko. How do I fix it?
@Queetinliu in that case I'd check the config.json and the container config. If you can share those files I'm happy to take a look.
I finally managed to build and push the image. I had to create the file as you said, @jameshearttech, and I also had to add the environment variable, which the documentation doesn't mention. With an HTTP Harbor I also had to add --insecure and --skip-tls-verify (see the sketch below). I have to say the documentation and error logs aren't friendly. This is a good tool, but it needs better documentation and examples for newcomers.
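A minimal sketch of the pieces described above (the registry address and project are the placeholders from this thread; the flags are kaniko's --insecure and --skip-tls-verify):

```shell
# Sketch only: DOCKER_CONFIG points at the directory holding config.json,
# and the two flags allow pushing to an HTTP-only registry without TLS verification.
export DOCKER_CONFIG=/kaniko/.docker
/kaniko/executor \
  --insecure \
  --skip-tls-verify \
  --context=... \
  --dockerfile=Dockerfile \
  --destination=10.1.12.12:30002/dao-2048/dao-2048:latest
```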
I have the same problem as @Queetinliu. @Queetinliu, can you give some hints/insights on how you managed to successfully push to a private Harbor repository?
Here is my current setup:
I am using a Harbor instance on Kubernetes that is exposed over NodePort 30002 (HTTP), and I am currently struggling to push container images to it.
I am trying to use the kaniko "Standard Input" method.
- Log in to the web UI of Harbor
- Create a new "Robot automation" account (apply for all projects and unlimited duration)
- Example username: robot$automation and generated secret: 3TXjtN7fLYuOKjl2phB2fmhKdTWrCOEs
- Generate the auth key for the config.json file:

```shell
echo -n robot$automation:3TXjtN7fLYuOKjl2phB2fmhKdTWrCOEs | base64
```

Example output:

```
cm9ib3Q6M1RYanRON2ZMWXVPS2psMnBoQjJmbWhLZFRXckNPRXM=
```
- Now encode the following JSON (the config.json file) with base64:

```shell
echo -n '{"auths":{"192.168.1.97:30002":{"username":"robot$automation","password":"3TXjtN7fLYuOKjl2phB2fmhKdTWrCOEs","auth":"cm9ib3Q6M1RYanRON2ZMWXVPS2psMnBoQjJmbWhLZFRXckNPRXM="}}}' | base64 -w 0
```

Example output:

```
eyJhdXRocyI6eyIxOTIuMTY4LjEuOTc6MzAwMDIiOnsidXNlcm5hbWUiOiJyb2JvdCRhdXRvbWF0aW9uIiwicGFzc3dvcmQiOiIzVFhqdE43ZkxZdU9LamwycGhCMmZtaEtkVFdyQ09FcyIsImF1dGgiOiJjbTlpYjNRNk0xUllhblJPTjJaTVdYVlBTMnBzTW5Cb1FqSm1iV2hMWkZSWGNrTlBSWE09In19fQ==
```
- Create a Kubernetes Secret (YAML file). Paste the long base64 string from the previous command into this command:

```shell
cat << EOF > kaniko-harbor-secret.yaml
apiVersion: v1
data:
  config.json: eyJhdXRocyI6eyIxOTIuMTY4LjEuOTc6MzAwMDIiOnsidXNlcm5hbWUiOiJyb2JvdCRhdXRvbWF0aW9uIiwicGFzc3dvcmQiOiIzVFhqdE43ZkxZdU9LamwycGhCMmZtaEtkVFdyQ09FcyIsImF1dGgiOiJjbTlpYjNRNk0xUllhblJPTjJaTVdYVlBTMnBzTW5Cb1FqSm1iV2hMWkZSWGNrTlBSWE09In19fQ==
kind: Secret
metadata:
  name: config-json
type: Opaque
EOF
```
- Apply the YAML to Kubernetes:

```shell
sudo kubectl apply -f kaniko-harbor-secret.yaml
```
- Run the kaniko "Standard Input" script:

```shell
echo -e 'FROM alpine \nRUN echo "created from standard input"' > Dockerfile | tar -cf - Dockerfile | gzip -9 | kubectl run kaniko \
  --rm --stdin=true \
  --image=gcr.io/kaniko-project/executor:latest --restart=Never \
  --overrides='{
    "apiVersion": "v1",
    "spec": {
      "containers": [
        {
          "name": "kaniko",
          "image": "gcr.io/kaniko-project/executor:latest",
          "stdin": true,
          "stdinOnce": true,
          "args": [
            "--insecure",
            "--skip-tls-verify",
            "--dockerfile=Dockerfile",
            "--context=tar://stdin",
            "--destination=192.168.1.97:30002/library/myalpine"
          ],
          "volumeMounts": [
            {
              "name": "docker-config",
              "mountPath": "/kaniko/.docker/"
            }
          ]
        }
      ],
      "volumes": [
        {
          "name": "docker-config",
          "secret": {
            "secretName": "config-json"
          }
        }
      ]
    }
  }'
```
If I run the kaniko "Standard Input" command I get the following error message:

```
If you don't see a command prompt, try pressing enter.
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "192.168.1.97:30002/library/myalpine": creating push check transport for 192.168.1.97:30002 failed: GET http://192.168.1.97/service/token?scope=repository%3Alibrary%2Fmyalpine%3Apush%2Cpull&service=harbor-registry: unexpected status code 404 Not Found: <html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
pod "kaniko" deleted
pod default/kaniko terminated (Error)
```
Why doesn't the GET request URL use port 30002?
@SuitDeer I think you forgot to add the environment variable. Try adding one with key DOCKER_CONFIG and value /kaniko/.docker, then build again. You should also check that the account has push privileges for the library project.
I have now added the environment variable DOCKER_CONFIG to the command, but I still get the same error message.
Command:

```shell
echo -e 'FROM alpine \nRUN echo "created from standard input"' > Dockerfile | tar -cf - Dockerfile | gzip -9 | kubectl run kaniko \
  --rm --stdin=true \
  --image=gcr.io/kaniko-project/executor:latest --restart=Never \
  --overrides='{
    "apiVersion": "v1",
    "spec": {
      "containers": [
        {
          "name": "kaniko",
          "image": "gcr.io/kaniko-project/executor:latest",
          "stdin": true,
          "stdinOnce": true,
          "env": [
            {
              "name": "DOCKER_CONFIG",
              "value": "/kaniko/.docker"
            }
          ],
          "args": [
            "--insecure",
            "--skip-tls-verify",
            "--dockerfile=Dockerfile",
            "--context=tar://stdin",
            "--destination=192.168.1.97:30002/library/myalpine"
          ],
          "volumeMounts": [
            {
              "name": "docker-config",
              "mountPath": "/kaniko/.docker/"
            }
          ]
        }
      ],
      "volumes": [
        {
          "name": "docker-config",
          "secret": {
            "secretName": "config-json"
          }
        }
      ]
    }
  }'
```
Output:

```
If you don't see a command prompt, try pressing enter.
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "192.168.1.97:30002/library/myalpine": creating push check transport for 192.168.1.97:30002 failed: GET http://192.168.1.97/service/token?scope=repository%3Alibrary%2Fmyalpine%3Apush%2Cpull&service=harbor-registry: unexpected status code 404 Not Found: <html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
pod "kaniko" deleted
pod default/kaniko terminated (Error)
```
I also checked the permissions for the account "robot$automation" in Harbor and everything seems OK.
I think the port (30002) is missing from the GET request, or am I on the wrong track here?
I have now manually tested the URL from the error message in my browser:
http://192.168.1.97/service/token?scope=repository%3Alibrary%2Fmyalpine%3Apush%2Cpull&service=harbor-registry
Output:
<html><head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body></html>
If I manually add port 30002 to the URL, I get a JSON object back:
http://192.168.1.97:30002/service/token?scope=repository%3Alibrary%2Fmyalpine%3Apush%2Cpull&service=harbor-registry
Output (JSON):
{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6IjRHT1g6WkxZRDpXMzZHOkZaNUo6UUhaTDpCNDJPOjJZM006VFZXVTpZSkJCOjZEN0s6U0JaTjpCVTdWIiwidHlwIjoiSldUIn0.eyJpc3MiOiJoYXJib3ItdG9rZW4taXNzdWVyIiwic3ViIjoiYWRtaW4iLCJhdWQiOiJoYXJib3ItcmVnaXN0cnkiLCJleHAiOjE2OTA3MTg2NTgsIm5iZiI6MTY5MDcxNjg1OCwiaWF0IjoxNjkwNzE2ODU4LCJqdGkiOiJZRHpkSkdEQWpIUFdzSmJTIiwiYWNjZXNzIjpbeyJ0eXBlIjoicmVwb3NpdG9yeSIsIm5hbWUiOiJsaWJyYXJ5L215YWxwaW5lIiwiYWN0aW9ucyI6WyJkZWxldGUiLCIqIiwicHVsbCIsInB1c2giXX1dfQ.Idto2hwWh-P-b4J0kvFvaBBho7dBV1K48zFKDxb0AAuQdRqVFqdIon4uvRYcq8PV7JOMkTslA9Y_OqzOFAn8g-FUpVfAaCUx5iGd9E_hsykPLysdTtNiOq1e3_noKW6zbpNk1J8kEu6OyyI9EynDMX9x_QzUoqfzN0qa-Zyh7egbLGUC0anByrphPRpU-gYEhs2Ss1NKXliZ__XhOb9QnOgl5qkSE4lls_kcJeAw5O_YshjLoBRBO3btz7lbfklT69dqQYhSJ9lmKRr9Lj59bCNCUGRjYqmWo0E2yGUAVR6QoBWO0i7AKAxRu_PmXCU3Xei_9k5JhzaVa-9KTtNLKQ","access_token":"","expires_in":1800,"issued_at":"2023-07-30T11:34:18Z"}
Is this a bug in Kaniko, @jameshearttech?
@SuitDeer add the 0400 mode to the docker-config mount.
@SuitDeer I see --destination=192.168.1.97:30002/library/myalpine. What is 192.168.1.97? Is that a pod IP? Which pod?
Hi @jameshearttech, 192.168.1.97 is the host IP of a Kubernetes node, and Harbor is running on NodePort 30002, so I do not use an ingress controller.
> @SuitDeer add the 0400 mode to the docker-config mount.
Hi @Queetinliu,
I have now added the defaultMode option to my command.
```
.....
      "volumes": [
        {
          "name": "docker-config",
          "secret": {
            "secretName": "config-json",
            "defaultMode": 256
          }
        }
      ]
    }
  }'
.....
```
INFO: I had to convert the octal number 0400 to 256 (decimal).
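A quick way to double-check that conversion (the shell's printf treats a number with a leading 0 as octal):

```shell
printf '%d\n' 0400   # prints 256
```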
But still I get the same error message
I think something is wrong with your Harbor. If you use containerd, can you use nerdctl to push an image to 192.168.1.97:30002/library/?
@SuitDeer which service in Harbor is configured for node port 30002?
@jameshearttech I have set up Harbor with the Rancher web UI as a Helm chart. Here is the content of the values.yaml file of the Harbor Helm chart:
caSecretName: ''
cache:
enabled: false
expireHours: 24
core:
affinity: {}
artifactPullAsyncFlushDuration: null
automountServiceAccountToken: false
gdpr:
deleteUser: false
image:
repository: goharbor/harbor-core
tag: v2.8.2
nodeSelector: {}
podAnnotations: {}
priorityClassName: null
replicas: 1
revisionHistoryLimit: 10
secret: ''
secretName: ''
serviceAccountName: ''
serviceAnnotations: {}
startupProbe:
enabled: true
initialDelaySeconds: 10
tokenCert: ''
tokenKey: ''
tolerations: []
xsrfKey: ''
database:
external:
coreDatabase: registry
existingSecret: ''
host: 192.168.0.1
notaryServerDatabase: notary_server
notarySignerDatabase: notary_signer
password: password
port: '5432'
sslmode: disable
username: user
internal:
affinity: {}
automountServiceAccountToken: false
image:
repository: goharbor/harbor-db
tag: v2.8.2
initContainer:
migrator: {}
permissions: {}
livenessProbe:
timeoutSeconds: 1
nodeSelector: {}
password: changeit
priorityClassName: null
readinessProbe:
timeoutSeconds: 1
serviceAccountName: ''
shmSizeLimit: 512Mi
tolerations: []
maxIdleConns: 100
maxOpenConns: 900
podAnnotations: {}
type: internal
enableMigrateHelmHook: false
existingSecretAdminPasswordKey: HARBOR_ADMIN_PASSWORD
existingSecretSecretKey: ''
exporter:
affinity: {}
automountServiceAccountToken: false
cacheCleanInterval: 14400
cacheDuration: 23
image:
repository: goharbor/harbor-exporter
tag: v2.8.2
nodeSelector: {}
podAnnotations: {}
priorityClassName: null
replicas: 1
revisionHistoryLimit: 10
serviceAccountName: ''
tolerations: []
expose:
clusterIP:
annotations: {}
name: harbor
ports:
httpPort: 80
httpsPort: 443
notaryPort: 4443
ingress:
annotations:
ingress.kubernetes.io/proxy-body-size: '0'
ingress.kubernetes.io/ssl-redirect: 'true'
nginx.ingress.kubernetes.io/proxy-body-size: '0'
nginx.ingress.kubernetes.io/ssl-redirect: 'true'
className: ''
controller: default
harbor:
annotations: {}
labels: {}
hosts:
core: core.harbor.domain
notary: notary.harbor.domain
kubeVersionOverride: ''
notary:
annotations: {}
labels: {}
loadBalancer:
IP: ''
annotations: {}
name: harbor
ports:
httpPort: 80
httpsPort: 443
notaryPort: 4443
sourceRanges: []
nodePort:
name: harbor
ports:
http:
nodePort: 30002
port: 80
https:
nodePort: 30003
port: 443
notary:
nodePort: 30004
port: 4443
tls:
auto:
commonName: ''
certSource: auto
enabled: false
secret:
notarySecretName: ''
secretName: ''
type: nodePort
externalURL: http://192.168.1.97
harborAdminPassword: Harbor123456
imagePullPolicy: IfNotPresent
imagePullSecrets: null
internalTLS:
certSource: auto
core:
crt: ''
key: ''
secretName: ''
enabled: false
jobservice:
crt: ''
key: ''
secretName: ''
portal:
crt: ''
key: ''
secretName: ''
registry:
crt: ''
key: ''
secretName: ''
trivy:
crt: ''
key: ''
secretName: ''
trustCa: ''
ipFamily:
ipv4:
enabled: true
ipv6:
enabled: true
jobservice:
affinity: {}
automountServiceAccountToken: false
image:
repository: goharbor/harbor-jobservice
tag: v2.8.2
jobLoggers:
- file
loggerSweeperDuration: 14
maxJobWorkers: 10
nodeSelector: {}
notification:
webhook_job_http_client_timeout: 3
webhook_job_max_retry: 3
podAnnotations: {}
priorityClassName: null
reaper:
max_dangling_hours: 168
max_update_hours: 24
replicas: 1
revisionHistoryLimit: 10
secret: ''
serviceAccountName: ''
tolerations: []
logLevel: info
metrics:
core:
path: /metrics
port: 8001
enabled: false
exporter:
path: /metrics
port: 8001
jobservice:
path: /metrics
port: 8001
registry:
path: /metrics
port: 8001
serviceMonitor:
additionalLabels: {}
enabled: false
interval: ''
metricRelabelings: []
relabelings: []
nginx:
affinity: {}
automountServiceAccountToken: false
image:
repository: goharbor/nginx-photon
tag: v2.8.2
nodeSelector: {}
podAnnotations: {}
priorityClassName: null
replicas: 1
revisionHistoryLimit: 10
serviceAccountName: ''
tolerations: []
notary:
enabled: true
secretName: ''
server:
affinity: {}
automountServiceAccountToken: false
image:
repository: goharbor/notary-server-photon
tag: v2.8.2
nodeSelector: {}
podAnnotations: {}
priorityClassName: null
replicas: 1
serviceAccountName: ''
tolerations: []
serviceAnnotations: {}
signer:
affinity: {}
automountServiceAccountToken: false
image:
repository: goharbor/notary-signer-photon
tag: v2.8.2
nodeSelector: {}
podAnnotations: {}
priorityClassName: null
replicas: 1
serviceAccountName: ''
tolerations: []
persistence:
enabled: true
imageChartStorage:
azure:
accountkey: base64encodedaccountkey
accountname: accountname
container: containername
existingSecret: ''
disableredirect: false
filesystem:
rootdirectory: /storage
gcs:
bucket: bucketname
encodedkey: base64-encoded-json-key-file
existingSecret: ''
useWorkloadIdentity: false
oss:
accesskeyid: accesskeyid
accesskeysecret: accesskeysecret
bucket: bucketname
region: regionname
s3:
bucket: bucketname
region: us-west-1
swift:
authurl: https://storage.myprovider.com/v3/auth
container: containername
password: password
username: username
type: filesystem
persistentVolumeClaim:
database:
accessMode: ReadWriteOnce
annotations: {}
existingClaim: ''
size: 1Gi
storageClass: ''
subPath: ''
jobservice:
jobLog:
accessMode: ReadWriteOnce
annotations: {}
existingClaim: ''
size: 1Gi
storageClass: ''
subPath: ''
redis:
accessMode: ReadWriteOnce
annotations: {}
existingClaim: ''
size: 1Gi
storageClass: ''
subPath: ''
registry:
accessMode: ReadWriteOnce
annotations: {}
existingClaim: ''
size: 5Gi
storageClass: ''
subPath: ''
trivy:
accessMode: ReadWriteOnce
annotations: {}
existingClaim: ''
size: 5Gi
storageClass: ''
subPath: ''
resourcePolicy: keep
portal:
affinity: {}
automountServiceAccountToken: false
image:
repository: goharbor/harbor-portal
tag: v2.8.2
nodeSelector: {}
podAnnotations: {}
priorityClassName: null
replicas: 1
revisionHistoryLimit: 10
serviceAccountName: ''
tolerations: []
proxy:
components:
- core
- jobservice
- trivy
httpProxy: null
httpsProxy: null
noProxy: 127.0.0.1,localhost,.local,.internal
redis:
external:
addr: 192.168.0.2:6379
coreDatabaseIndex: '0'
existingSecret: ''
jobserviceDatabaseIndex: '1'
password: ''
registryDatabaseIndex: '2'
sentinelMasterSet: ''
trivyAdapterIndex: '5'
username: ''
internal:
affinity: {}
automountServiceAccountToken: false
image:
repository: goharbor/redis-photon
tag: v2.8.2
nodeSelector: {}
priorityClassName: null
serviceAccountName: ''
tolerations: []
podAnnotations: {}
type: internal
registry:
affinity: {}
automountServiceAccountToken: false
controller:
image:
repository: goharbor/harbor-registryctl
tag: v2.8.2
credentials:
existingSecret: ''
password: harbor_registry_password
username: harbor_registry_user
middleware:
cloudFront:
baseurl: example.cloudfront.net
duration: 3000s
ipfilteredby: none
keypairid: KEYPAIRID
privateKeySecret: my-secret
enabled: false
type: cloudFront
nodeSelector: {}
podAnnotations: {}
priorityClassName: null
registry:
image:
repository: goharbor/registry-photon
tag: v2.8.2
relativeurls: false
replicas: 1
revisionHistoryLimit: 10
secret: ''
serviceAccountName: ''
tolerations: []
upload_purging:
age: 168h
dryrun: false
enabled: true
interval: 24h
secretKey: not-a-secure-key
trace:
enabled: false
jaeger:
endpoint: http://hostname:14268/api/traces
otel:
compression: false
endpoint: hostname:4318
insecure: true
timeout: 10
url_path: /v1/traces
provider: jaeger
sample_rate: 1
trivy:
affinity: {}
automountServiceAccountToken: false
debugMode: false
enabled: true
gitHubToken: ''
ignoreUnfixed: false
image:
repository: goharbor/trivy-adapter-photon
tag: v2.8.2
insecure: false
nodeSelector: {}
offlineScan: false
podAnnotations: {}
priorityClassName: null
replicas: 1
resources:
limits:
cpu: 1
memory: 1Gi
requests:
cpu: 200m
memory: 512Mi
securityCheck: vuln
serviceAccountName: ''
severity: UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL
skipUpdate: false
timeout: 5m0s
tolerations: []
vulnType: os,library
updateStrategy:
type: RollingUpdate
global:
cattle:
systemProjectId: p-qj8cr
Here is a screenshot of all running Harbor services: https://imgur.com/a/UJIxCNk
The service "harbor" is running on NodePort 30002, and this is an nginx server from the Harbor Helm chart: https://imgur.com/a/daiCPNp
@Queetinliu I have now installed nerdctl on my Kubernetes node, but I get the following error:
nerdctl push 192.168.1.97:30002/library/
Output:
FATA[0000] cannot access containerd socket "/run/containerd/containerd.sock": no such file or directory
INFO: I have set up an on-premise Rancher RKE2 Kubernetes cluster.
Sorry, I am fairly new to Kubernetes 😅
I think you are right, @Queetinliu... I have set up another machine in my home network (192.168.1.98, Ubuntu). I installed Docker on it, and when I try to push to my local Harbor repository I get the same error message as before:
sudo docker push 192.168.1.97/library/myalpine:latest
The push refers to repository [192.168.1.97/library/myalpine]
6aa151f54882: Preparing
501c9649366f: Preparing
514fd0c9311e: Preparing
59c56aee1fb4: Preparing
error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>404 Not Found</title></head>\r\n<body>\r\n<center><h1>404 Not Found</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
How did you manage to set up your Harbor registry on Kubernetes via NodePort?
@SuitDeer I'm looking at a dev cluster and we use ingress to access Harbor. I see the backends in the harbor ingress are paths on the harbor-core service. Looks like all traffic goes to harbor-core except /, which goes to harbor-portal. I know registry traffic goes through harbor-core, which communicates with the harbor-registry service.
If you want Kaniko to connect to Harbor and they are both running in K8s, you can use cluster DNS. For example, in this cluster the cluster-DNS FQDN for the harbor-core service would be harbor-core.harbor.svc.cluster.local (a sketch using it as a Kaniko destination follows the ingress output below). If you want Kaniko to connect to Harbor using NodePort, you need to configure the harbor-core service with a nodePort. Is the harbor-core service configured with nodePort?
Btw, why not just use ingress?
$ kubectl get service -n harbor -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
harbor-core ClusterIP 10.96.165.199 <none> 80/TCP 186d app=harbor,component=core,release=harbor
harbor-database ClusterIP 10.110.251.142 <none> 5432/TCP 186d app=harbor,component=database,release=harbor
harbor-jobservice ClusterIP 10.108.234.165 <none> 80/TCP 186d app=harbor,component=jobservice,release=harbor
harbor-notary-server ClusterIP 10.101.59.43 <none> 4443/TCP 186d app=harbor,component=notary-server,release=harbor
harbor-notary-signer ClusterIP 10.107.145.237 <none> 7899/TCP 186d app=harbor,component=notary-signer,release=harbor
harbor-portal ClusterIP 10.106.70.205 <none> 80/TCP 186d app=harbor,component=portal,release=harbor
harbor-redis ClusterIP 10.105.95.119 <none> 6379/TCP 186d app=harbor,component=redis,release=harbor
harbor-registry ClusterIP 10.106.43.14 <none> 5000/TCP,8080/TCP 186d app=harbor,component=registry,release=harbor
harbor-trivy ClusterIP 10.108.75.67 <none> 8080/TCP 186d app=harbor,component=trivy,release=harbor
$ kubectl get ingress/harbor-ingress -n harbor -o yaml | yq .spec.rules
- host: harbor.k8s.integrisit.dev
http:
paths:
- backend:
service:
name: harbor-core
port:
number: 80
path: /api/
pathType: Prefix
- backend:
service:
name: harbor-core
port:
number: 80
path: /service/
pathType: Prefix
- backend:
service:
name: harbor-core
port:
number: 80
path: /v2/
pathType: Prefix
- backend:
service:
name: harbor-core
port:
number: 80
path: /chartrepo/
pathType: Prefix
- backend:
service:
name: harbor-core
port:
number: 80
path: /c/
pathType: Prefix
- backend:
service:
name: harbor-portal
port:
number: 80
path: /
pathType: Prefix
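As a rough sketch of the cluster-DNS idea mentioned above (the service name, namespace, and project are assumptions based on the chart defaults discussed in this thread, and the HTTP-only flags from earlier are kept):

```shell
# Hypothetical kaniko invocation pushing to Harbor via the in-cluster service name
# instead of a node IP; adjust service, namespace, and project to your install.
/kaniko/executor \
  --insecure \
  --skip-tls-verify \
  --context=tar://stdin \
  --dockerfile=Dockerfile \
  --destination=harbor-core.harbor.svc.cluster.local/library/myalpine:latest
```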
This weekend I finally had time to look at the problem again. I now run a private container registry on another machine (a plain Docker private registry) and I can push images flawlessly with Kaniko from my Kubernetes cluster. Thanks for the help, everyone 👍
I had the same issues until I understood that DOCKER_CONFIG must point to the folder containing config.json, not to the file itself. After I got that and configured it accordingly, everything went smoothly.
Just use this type of config:

```json
{
  "auths": {
    "harbor.domain.name": {
      "auth": "base64(login:password)"
    }
  }
}
```
Note that in my setup I wasn't prefixing the Harbor domain name with http:// or https:// (it is running on HTTPS), and wasn't postfixing it with /v2, and it was working pretty well.
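In other words, a tiny sketch of that point (the path here is kaniko's default config directory):

```shell
export DOCKER_CONFIG=/kaniko/.docker   # the directory, not .../config.json
ls "$DOCKER_CONFIG"                    # should list config.json
```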
In the end, my problem was broken NTP (systemd-timesyncd) on the Kubernetes nodes. Fixing NTP synchronization resolved my "401 unauthorized" error.
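If you suspect the same cause, a quick check on each node (assuming systemd-based hosts) is:

```shell
timedatectl status   # look for "System clock synchronized: yes"
```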
My error was UNAUTHORIZED: unauthorized to access repository. To debug the issue, check config.json in the runner pod: kubectl exec -it -n gitlab-runner runner-pod -- cat /kaniko/.docker/config.json. My problem was with the auth parameter: instead of using $(printf "%s:%s" "${DOCKER_REGISTRY_USER}" "${DOCKER_REGISTRY_PASSWORD}" | base64 | tr -d), I used $(echo -n "${DOCKER_REGISTRY_USER}":"${DOCKER_REGISTRY_PASSWORD}" | base64) and my issue was solved.
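A plausible reason the printf variant fails (my reading, not stated in the comment above): plain base64 wraps long output at 76 columns, so a newline can end up inside the auth value. Disabling wrapping avoids that:

```shell
# Same idea as the working echo -n form, with wrapping explicitly turned off
printf '%s:%s' "${DOCKER_REGISTRY_USER}" "${DOCKER_REGISTRY_PASSWORD}" | base64 -w 0
```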
I had the same issue and resolved it by:

```shell
echo -n 'robot$something:pass' | base64
```
Note that bash won't try to expand $something when using single quotes, and thus it returns a different result from:
echo -n "robot$something:pass" | base64