helm3-charts
After manual nexus pod restart docker repository url returning 404
I needed to restart a pod to resize a bound PVC. After the restart the PVC was resized, but the docker registry became unavailable (its URL now returns an nginx 404). The only thing I changed was the persistence.storageSize value. Nexus itself is functioning normally: I can work with the main URL and the other registries, but the docker registry seems to be down. DNS and ingress are configured properly.
What could be the reason? I've been stuck on this issue all day.
My values
---
statefulset:
  # This is not supported
  enabled: false
deploymentStrategy: Recreate
image:
  # Sonatype Official Public Image
  repository: sonatype/nexus3
  tag: 3.41.0
  pullPolicy: IfNotPresent
imagePullSecrets:
# for image registries that require login, specify the name of the existing
# kubernetes secret
#   - name: <pull-secret-name>
nexus:
  docker:
    enabled: true
    registries:
      - port: 5000
        host: docker-registry.mydomain.com
        secretName: docker-registry-tls
  env:
    # minimum recommended memory settings for a small, personal instance from
    # https://help.sonatype.com/repomanager3/product-information/system-requirements
    - name: INSTALL4J_ADD_VM_PARAMS
      value: |-
        -Xms2048m -Xmx2048m
        -XX:ActiveProcessorCount=4
        -XX:MaxDirectMemorySize=2703M
        -XX:+UnlockExperimentalVMOptions
        -XX:+UseCGroupMemoryLimitForHeap
        -Djava.util.prefs.userRoot=/nexus-data/javaprefs
    - name: NEXUS_SECURITY_RANDOMPASSWORD
      value: 'true'
  properties:
    override: false
    data:
      nexus.scripts.allowCreation: true
      # See this article for ldap configuration options https://support.sonatype.com/hc/en-us/articles/216597138-Setting-Advanced-LDAP-Connection-Properties-in-Nexus-Repository-Manager
      # nexus.ldap.env.java.naming.security.authentication: simple
  # nodeSelector:
  #   cloud.google.com/gke-nodepool: default-pool
  resources:
    # minimum recommended memory settings for a small, personal instance from
    # https://help.sonatype.com/repomanager3/product-information/system-requirements
    # requests:
    #   cpu: 4
    #   memory: 8Gi
    # limits:
    #   cpu: 4
    #   memory: 8Gi
  # The ports should only be changed if the nexus image uses a different port
  nexusPort: 8081
  # Default the pods UID and GID to match the nexus3 container.
  # Customize or remove these values from the securityContext as appropriate for
  # your deployment environment.
  securityContext:
    runAsUser: 200
    runAsGroup: 200
    fsGroup: 200
  podAnnotations: {}
  livenessProbe:
    initialDelaySeconds: 30
    periodSeconds: 30
    failureThreshold: 6
    timeoutSeconds: 10
    path: /
  readinessProbe:
    initialDelaySeconds: 30
    periodSeconds: 30
    failureThreshold: 6
    timeoutSeconds: 10
    path: /
  # hostAliases allows the modification of the hosts file inside a container
  hostAliases: []
  # - ip: "192.168.1.10"
  #   hostnames:
  #   - "example.com"
  #   - "www.example.com"
nameOverride: ''
fullnameOverride: ''
deployment:
  # # Add annotations in deployment to enhance deployment configurations
  annotations: {}
  # # Add init containers. e.g. to be used to give specific permissions for nexus-data.
  # # Add your own init container or uncomment and modify the given example.
  initContainers:
  # - name: fmp-volume-permission
  #   image: busybox
  #   imagePullPolicy: IfNotPresent
  #   command: ['chown', '-R', '200', '/nexus-data']
  #   volumeMounts:
  #     - name: nexus-data
  #       mountPath: /nexus-data
  # Uncomment and modify this to run a command after starting the nexus container.
  postStart:
    command: # '["/bin/sh", "-c", "ls"]'
  preStart:
    command: # '["/bin/rm", "-f", "/path/to/lockfile"]'
  terminationGracePeriodSeconds: 120
  additionalContainers:
  additionalVolumes:
  additionalVolumeMounts:
ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: '0'
    kubernetes.io/ingress.allow-http: 'false'
    cert-manager.io/cluster-issuer: letsencrypt-issuer
  hostPath: /
  hostRepo: storage.mydomain.com
  tls:
    - secretName: nexus-tls
      hosts:
        - storage.mydomain.com
service:
  name: nexus3
  enabled: true
  labels: {}
  annotations: {}
  type: ClusterIP
route:
  enabled: false
  name: docker
  portName: docker
  labels:
  annotations:
  # path: /docker
nexusProxyRoute:
  enabled: false
  labels:
  annotations:
  # path: /nexus
persistence:
  enabled: true
  accessMode: ReadWriteOnce
  ## If defined, storageClass: <storageClass>
  ## If set to "-", storageClass: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClass spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  # existingClaim:
  annotations:
    'helm.sh/resource-policy': keep
  # storageClass: "-"
  storageSize: 128Gi
  # If PersistentDisk already exists you can create a PV for it by including the 2 following keypairs.
  # pdName: nexus-data-disk
  # fsType: ext4
tolerations: []
# Enable configmap and add data in configmap
config:
  enabled: false
  mountPath: /sonatype-nexus-conf
  data: []
# # To use an additional secret, set enable to true and add data
secret:
  enabled: false
  mountPath: /etc/secret-volume
  readOnly: true
  data: []
serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ''
I found out that the docker registry ingress is not getting an IP address assigned. I tried deleting the whole chart (except the PVCs) and reinstalling it, but nothing changed. The main ingress has an IP address assigned; the docker registry ingress does not.
That is weird, because everything worked fine before with the same values; I only changed the size of the persistent storage.
Okay, I figured out what caused this issue.
Somehow the chart did not set spec.ingressClassName: nginx on the docker registry ingress. When I assigned it with kubectl edit, that solved the issue. I guess this should be fixed in the chart.
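For anyone hitting the same thing, the kubectl edit fix can also be applied non-interactively as a merge patch. A minimal sketch, assuming the generated ingress is named nexus-repository-manager-docker-5000 (the chart's default naming for a registry on port 5000; adjust to your release name):

```yaml
# ingress-class-patch.yaml — adds the class name the chart omitted.
# Apply with:
#   kubectl patch ingress nexus-repository-manager-docker-5000 \
#     --type merge --patch-file ingress-class-patch.yaml
spec:
  ingressClassName: nginx
```

Note this is lost again on the next helm upgrade, since the chart re-renders the ingress without the field.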
Hi,
Thanks for the hint about the ingress class. That solved the same problem I had (404)!
I looked in the chart template for the ingress and found out that you can set extraLabels and annotations on the docker ingress. They should come after "port".
docker:
  enabled: true
  registries:
    - host: registry-nexus.example.com
      port: 5003
      annotations:
        kubernetes.io/ingress.class: nginx
        kubernetes.io/tls-acme: "true"
        nginx.ingress.kubernetes.io/proxy-body-size: "2G"
        nginx.org/client-max-body-size: 2G
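For completeness, extraLabels sits at the same level as annotations in the registry entry, per the chart template mentioned above. A sketch; the label key and value here are purely illustrative placeholders:

```yaml
docker:
  enabled: true
  registries:
    - host: registry-nexus.example.com
      port: 5003
      extraLabels:
        team: platform  # hypothetical label, replace with your own
      annotations:
        kubernetes.io/ingress.class: nginx
```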
So I used the deprecated kubernetes.io/ingress.class annotation for now.
I think you are right: they should support the newer ingressClassName spec.
Regards, Roelof.
Unfortunately, the deprecated ingress.class annotation is removed by Helm. I fixed it by adding a kustomization.yaml:
patchesJson6902:
  - path: patch-ingress-docker.yaml
    target:
      group: networking.k8s.io
      kind: Ingress
      name: nexus-repository-manager-docker-5000
      version: v1
patch-ingress-docker.yaml:
- op: add
  path: "/spec/ingressClassName"
  value: "nginx"
But of course the chart should still be fixed!
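For the kustomization route to work end to end, the kustomization.yaml also needs a resources entry pointing at the rendered chart output. A sketch under the assumption that the manifests are rendered with helm template first (the release and file names here are illustrative):

```yaml
# kustomization.yaml — assumes the chart was rendered into base.yaml first, e.g.:
#   helm template nexus sonatype/nexus-repository-manager -f values.yaml > base.yaml
resources:
  - base.yaml

patchesJson6902:
  - path: patch-ingress-docker.yaml
    target:
      group: networking.k8s.io
      version: v1
      kind: Ingress
      name: nexus-repository-manager-docker-5000
```

The result can then be applied with kubectl apply -k . — or the same patch can be wired into helm upgrade via the --post-renderer flag if you want to keep using helm directly.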