
Trying to upgrade from 6.6.10 to 7.0.0

Open Xenon-777 opened this issue 6 months ago • 10 comments

Describe your Issue

I get this error:

Logs and Errors

Error: UPGRADE FAILED: cannot patch "nextcloud-mariadb" with kind StatefulSet: StatefulSet.apps "nextcloud-mariadb" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'ordinals', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden
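This error means the rendered chart changed StatefulSet fields that Kubernetes treats as immutable (for example `volumeClaimTemplates` or the label `selector`), which can happen when a chart major version renames or restructures its resources. One way to see exactly which fields differ is to render the new chart version and diff it against the live objects; the release name `nextcloud`, namespace `nextcloud`, and chart reference below are assumptions to adjust for your setup:

```shell
# Render the new chart version locally and diff the manifests
# against the objects currently in the cluster
helm template nextcloud nextcloud/nextcloud --version 7.0.0 -f values.yaml \
  | kubectl diff --namespace nextcloud -f -
```

Fields flagged in the diff that are outside `replicas`, `ordinals`, `template`, `updateStrategy`, `persistentVolumeClaimRetentionPolicy`, and `minReadySeconds` are the ones triggering the "Forbidden" rejection.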

Describe your Environment

  • Kubernetes distribution: k3s

  • Helm Version (or App that manages helm): Rancher 2.11.3

  • values.yaml:

affinity: {}
collabora:
  autoscaling:
    enabled: false
  collabora:
    aliasgroups: []
    existingSecret:
      enabled: false
      passwordKey: xxx
      secretName: ''
      usernameKey: xxx
    extra_params: '--o:ssl.enable=false'
    password: xxx
    server_name: null
    username: xxx
  enabled: false
  ingress:
    annotations: {}
    className: ''
    enabled: false
    hosts:
      - host: chart-example.local
        paths:
          - path: /
            pathType: ImplementationSpecific
    tls: []
  resources: {}
cronjob:
  command:
    - /cron.sh
  enabled: false
  lifecycle: {}
  resources: {}
  securityContext: {}
deploymentAnnotations: {}
deploymentLabels: {}
dnsConfig: {}
externalDatabase:
  database: nextcloud
  enabled: true
  existingSecret:
    enabled: false
    passwordKey: xxx
    usernameKey: xxx
  host: xxx
  password: xxx
  type: mysql
  user: xxx
fullnameOverride: ''
hpa:
  cputhreshold: 60
  enabled: false
  maxPods: 10
  minPods: 1
image:
  flavor: apache
  pullPolicy: IfNotPresent
  repository: nextcloud
  tag: null
imaginary:
  enabled: false
  image:
    pullPolicy: IfNotPresent
    pullSecrets: []
    registry: docker.io
    repository: h2non/imaginary
    tag: 1.2.4
  livenessProbe:
    enabled: true
    failureThreshold: 3
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
  nodeSelector: {}
  podAnnotations: {}
  podLabels: {}
  podSecurityContext: {}
  readinessProbe:
    enabled: true
    failureThreshold: 3
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
  replicaCount: 1
  resources: {}
  securityContext:
    runAsNonRoot: true
    runAsUser: xxx
  service:
    annotations: {}
    labels: {}
    loadBalancerIP: null
    nodePort: null
    type: ClusterIP
  tolerations: []
ingress:
  annotations: {}
  enabled: false
  labels: {}
  path: /
  pathType: Prefix
internalDatabase:
  enabled: false
  name: xxx
lifecycle: {}
livenessProbe:
  enabled: true
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
mariadb:
  architecture: standalone
  auth:
    customPasswordFiles: {}
    database: xxx
    existingSecret: ''
    forcePassword: false
    password: xxx
    replicationPassword: xxx
    replicationUser: xxx
    rootPassword: xxx
    usePasswordFiles: false
    username: xxx
  clusterDomain: xxx
  common:
    exampleValue: common-chart
    global:
      compatibility:
        openshift:
          adaptSecurityContext: auto
      defaultStorageClass: ''
      imagePullSecrets: []
      imageRegistry: ''
      storageClass: ''
  commonAnnotations: {}
  commonLabels: {}
  diagnosticMode:
    args:
      - infinity
    command:
      - sleep
    enabled: false
  enabled: true
  extraDeploy: []
  fullnameOverride: ''
  global:
    compatibility:
      openshift:
        adaptSecurityContext: auto
    defaultStorageClass: ''
    imagePullSecrets: []
    imageRegistry: ''
    storageClass: ''
  image:
    debug: false
    digest: ''
    pullPolicy: IfNotPresent
    pullSecrets: []
    registry: docker.io
    repository: bitnami/mariadb
    tag: 11.3.2-debian-12-r5
  initdbScripts: {}
  initdbScriptsConfigMap: ''
  kubeVersion: ''
  metrics:
    annotations:
      prometheus.io/port: 'xxx'
      prometheus.io/scrape: 'true'
    containerPorts:
      http: xxx
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      enabled: false
      privileged: false
      readOnlyRootFilesystem: true
      runAsGroup: xxx
      runAsNonRoot: true
      runAsUser: xxx
      seLinuxOptions: {}
      seccompProfile:
        type: RuntimeDefault
    enabled: false
    extraArgs:
      primary: []
      secondary: []
    extraVolumeMounts:
      primary: []
      secondary: []
    image:
      digest: ''
      pullPolicy: IfNotPresent
      pullSecrets: []
      registry: docker.io
      repository: bitnami/mysqld-exporter
      tag: 0.15.1-debian-12-r16
    livenessProbe:
      enabled: true
      failureThreshold: 3
      initialDelaySeconds: 120
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    prometheusRule:
      additionalLabels: {}
      enabled: false
      namespace: ''
      rules: []
    readinessProbe:
      enabled: true
      failureThreshold: 3
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources: {}
    resourcesPreset: nano
    serviceMonitor:
      enabled: false
      honorLabels: false
      interval: 30s
      jobLabel: ''
      labels: {}
      metricRelabelings: []
      namespace: ''
      relabelings: []
      scrapeTimeout: ''
      selector: {}
  nameOverride: ''
  networkPolicy:
    allowExternal: true
    allowExternalEgress: true
    enabled: true
    extraEgress: []
    extraIngress: []
    ingressNSMatchLabels: {}
    ingressNSPodMatchLabels: {}
  primary:
    affinity: {}
    args: []
    automountServiceAccountToken: false
    command: []
    configuration: |-
      [mysqld]
      sxxx

      [client]
      xxx

      [manager]
      xxx
    containerPorts:
      mysql: xxx
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      enabled: true
      privileged: false
      readOnlyRootFilesystem: true
      runAsGroup: xxx
      runAsNonRoot: true
      runAsUser: xxx
      seLinuxOptions: {}
      seccompProfile:
        type: RuntimeDefault
    customLivenessProbe: {}
    customReadinessProbe: {}
    customStartupProbe: {}
    existingConfigmap: ''
    extraEnvVars: []
    extraEnvVarsCM: ''
    extraEnvVarsSecret: ''
    extraFlags: ''
    extraVolumeMounts: []
    extraVolumes: []
    hostAliases: []
    initContainers: []
    lifecycleHooks: {}
    livenessProbe:
      enabled: true
      failureThreshold: 3
      initialDelaySeconds: 120
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: primary
    nodeAffinityPreset:
      key: ''
      type: ''
      values: []
    nodeSelector: {}
    pdb:
      create: true
      maxUnavailable: ''
      minAvailable: ''
    persistence:
      accessMode: xxx
      accessModes:
        - xxx
      annotations: {}
      enabled: true
      existingClaim: ''
      labels: {}
      selector: {}
      size: xxx
      storageClass: ''
      subPath: ''
    podAffinityPreset: ''
    podAnnotations: {}
    podAntiAffinityPreset: soft
    podLabels: {}
    podManagementPolicy: ''
    podSecurityContext:
      enabled: true
      fsGroup: xxx
      fsGroupChangePolicy: Always
      supplementalGroups: []
      sysctls: []
    priorityClassName: ''
    readinessProbe:
      enabled: true
      failureThreshold: 3
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources: {}
    resourcesPreset: micro
    revisionHistoryLimit: 10
    rollingUpdatePartition: ''
    runtimeClassName: ''
    schedulerName: ''
    service:
      annotations: {}
      clusterIP: ''
      externalTrafficPolicy: Cluster
      extraPorts: []
      loadBalancerIP: ''
      loadBalancerSourceRanges: []
      nodePorts:
        mysql: ''
      ports:
        metrics: xxx
        mysql: xxx
      sessionAffinity: None
      sessionAffinityConfig: {}
      type: ClusterIP
    sidecars: []
    startupProbe:
      enabled: false
      failureThreshold: 10
      initialDelaySeconds: 120
      periodSeconds: 15
      successThreshold: 1
      timeoutSeconds: 5
    startupWaitOptions: {}
    tolerations: []
    topologySpreadConstraints: []
    updateStrategy:
      type: RollingUpdate
  rbac:
    create: false
  runtimeClassName: ''
  schedulerName: ''
  secondary:
    affinity: {}
    args: []
    automountServiceAccountToken: false
    command: []
    configuration: |-
      [mysqld]
      xxx

      [client]
      xxx

      [manager]
      xxx
    containerPorts:
      mysql: xxx
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      enabled: true
      privileged: false
      readOnlyRootFilesystem: true
      runAsGroup: xxx
      runAsNonRoot: true
      runAsUser: xxx
      seLinuxOptions: {}
      seccompProfile:
        type: RuntimeDefault
    customLivenessProbe: {}
    customReadinessProbe: {}
    customStartupProbe: {}
    existingConfigmap: ''
    extraEnvVars: []
    extraEnvVarsCM: ''
    extraEnvVarsSecret: ''
    extraFlags: ''
    extraVolumeMounts: []
    extraVolumes: []
    hostAliases: []
    initContainers: []
    lifecycleHooks: {}
    livenessProbe:
      enabled: true
      failureThreshold: 3
      initialDelaySeconds: 120
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: secondary
    nodeAffinityPreset:
      key: ''
      type: ''
      values: []
    nodeSelector: {}
    pdb:
      create: true
      maxUnavailable: ''
      minAvailable: ''
    persistence:
      accessModes:
        - xxx
      annotations: {}
      enabled: true
      labels: {}
      selector: {}
      size: xxx
      storageClass: ''
      subPath: ''
    podAffinityPreset: ''
    podAnnotations: {}
    podAntiAffinityPreset: soft
    podLabels: {}
    podManagementPolicy: ''
    podSecurityContext:
      enabled: true
      fsGroup: xxx
      fsGroupChangePolicy: Always
      supplementalGroups: []
      sysctls: []
    priorityClassName: ''
    readinessProbe:
      enabled: true
      failureThreshold: 3
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    replicaCount: 1
    resources: {}
    resourcesPreset: micro
    revisionHistoryLimit: 10
    rollingUpdatePartition: ''
    runtimeClassName: ''
    schedulerName: ''
    service:
      annotations: {}
      clusterIP: ''
      externalTrafficPolicy: Cluster
      extraPorts: []
      loadBalancerIP: ''
      loadBalancerSourceRanges: []
      nodePorts:
        mysql: ''
      ports:
        metrics: xxx
        mysql: xxx
      sessionAffinity: None
      sessionAffinityConfig: {}
      type: ClusterIP
    sidecars: []
    startupProbe:
      enabled: false
      failureThreshold: 10
      initialDelaySeconds: 120
      periodSeconds: 15
      successThreshold: 1
      timeoutSeconds: 5
    startupWaitOptions: {}
    tolerations: []
    topologySpreadConstraints: []
    updateStrategy:
      type: RollingUpdate
  serviceAccount:
    annotations: {}
    automountServiceAccountToken: false
    create: true
    name: ''
  serviceBindings:
    enabled: false
  volumePermissions:
    enabled: false
    image:
      digest: ''
      pullPolicy: IfNotPresent
      pullSecrets: []
      registry: docker.io
      repository: bitnami/os-shell
      tag: 12-debian-12-r21
    resources: {}
    resourcesPreset: nano
metrics:
  affinity: {}
  enabled: false
  https: false
  image:
    pullPolicy: IfNotPresent
    repository: xperimental/nextcloud-exporter
    tag: 0.6.2
  info:
    apps: false
  nodeSelector:
    worker: 'true'
  podAnnotations: {}
  podLabels: {}
  podSecurityContext: {}
  replicaCount: 1
  resources: {}
  securityContext:
    runAsNonRoot: true
    runAsUser: xxx
  server: ''
  service:
    annotations:
      prometheus.io/port: 'xxx'
      prometheus.io/scrape: 'true'
    labels: {}
    loadBalancerIP: null
    type: ClusterIP
  serviceMonitor:
    enabled: false
    interval: 30s
    jobLabel: ''
    labels: {}
    namespace: ''
    namespaceSelector: null
    scrapeTimeout: ''
  timeout: 5s
  tlsSkipVerify: false
  token: ''
  tolerations: []
nameOverride: ''
nextcloud:
  configs: {}
  containerPort: xxx
  datadir: /xxx
  defaultConfigs:
    .htaccess: true
    apache-pretty-urls.config.php: true
    apcu.config.php: true
    apps.config.php: true
    autoconfig.php: true
    imaginary.config.php: false
    redis.config.php: true
    reverse-proxy.config.php: true
    s3.config.php: true
    smtp.config.php: true
    swift.config.php: true
    upgrade-disable-web.config.php: true
  existingSecret:
    enabled: false
    passwordKey: xxx
    smtpHostKey: xxx
    smtpPasswordKey: xxx
    smtpUsernameKey: xxx
    tokenKey: ''
    usernameKey: xxx
  extraEnv: null
  extraInitContainers: []
  extraSidecarContainers: []
  extraVolumeMounts: null
  extraVolumes: null
  hooks:
    before-starting: null
    post-installation: null
    post-upgrade: null
    pre-installation: null
    pre-upgrade: null
  host: xxx
  mail:
    domain: xxx
    enabled: false
    fromAddress: xxx
    smtp:
      authtype: LOGIN
      host: xxx
      name: xxx
      password: xxx
      port: xxx
      secure: ssl
  mariaDbInitContainer:
    resources: {}
    securityContext: {}
  objectStore:
    s3:
      accessKey: ''
      autoCreate: false
      bucket: ''
      enabled: false
      existingSecret: ''
      host: ''
      legacyAuth: false
      port: '443'
      prefix: ''
      region: eu-west-1
      secretKey: ''
      secretKeys:
        accessKey: ''
        bucket: ''
        host: ''
        secretKey: ''
        sse_c_key: ''
      sse_c_key: ''
      ssl: true
      storageClass: STANDARD
      usePathStyle: false
    swift:
      autoCreate: false
      container: ''
      enabled: false
      project:
        domain: Default
        name: ''
      region: ''
      service: swift
      url: ''
      user:
        domain: Default
        name: ''
        password: ''
  password: xxx
  persistence:
    subPath: null
  phpConfigs: {}
  podSecurityContext: {}
  postgreSqlInitContainer:
    resources: {}
    securityContext: {}
  securityContext: {}
  strategy:
    type: Recreate
  trustedDomains: []
  update: 0
  username: xxx
nginx:
  config:
    custom: null
    default: true
    headers:
      Referrer-Policy: no-referrer
      Strict-Transport-Security: ''
      X-Content-Type-Options: nosniff
      X-Download-Options: noopen
      X-Frame-Options: SAMEORIGIN
      X-Permitted-Cross-Domain-Policies: none
      X-Robots-Tag: noindex, nofollow
      X-XSS-Protection: 1; mode=block
  containerPort: 80
  enabled: false
  extraEnv: []
  image:
    pullPolicy: IfNotPresent
    repository: nginx
    tag: alpine
  ipFamilies:
    - IPv4
  resources: {}
  securityContext: {}
nodeSelector:
  worker: 'true'
persistence:
  accessMode: xxx
  annotations: {}
  enabled: true
  nextcloudData:
    accessMode: xxx
    annotations: {}
    enabled: true
    size: 100Gi
    subPath: null
  size: 20Gi
phpClientHttpsFix:
  enabled: false
  protocol: https
podAnnotations: {}
postgresql:
  enabled: false
  global:
    postgresql:
      auth:
        database: xxx
        existingSecret: ''
        password: xxx
        secretKeys:
          adminPasswordKey: ''
          replicationPasswordKey: ''
          userPasswordKey: ''
        username: xxx
  primary:
    persistence:
      enabled: false
rbac:
  enabled: true
  serviceaccount:
    annotations: {}
    create: true
    name: xxx
readinessProbe:
  enabled: true
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
redis:
  architecture: replication
  auth:
    enabled: false
    existingSecret: ''
    existingSecretPasswordKey: ''
    password: xxx
    sentinel: true
    usePasswordFileFromSecret: true
    usePasswordFiles: false
  clusterDomain: xxx
  common:
    exampleValue: xxx
    global:
      compatibility:
        openshift:
          adaptSecurityContext: auto
      defaultStorageClass: ''
      imagePullSecrets: []
      imageRegistry: ''
      redis:
        password: ''
      storageClass: ''
  commonAnnotations: {}
  commonConfiguration: |-
    xxx
  commonLabels: {}
  diagnosticMode:
    args:
      - infinity
    command:
      - sleep
    enabled: false
  enabled: true
  existingConfigmap: ''
  extraDeploy: []
  fullnameOverride: ''
  global:
    compatibility:
      openshift:
        adaptSecurityContext: auto
    defaultStorageClass: ''
    imagePullSecrets: []
    imageRegistry: ''
    redis:
      password: ''
    storageClass: ''
  image:
    debug: false
    digest: ''
    pullPolicy: IfNotPresent
    pullSecrets: []
    registry: docker.io
    repository: bitnami/redis
    tag: 7.2.5-debian-12-r4
  kubeVersion: ''
  kubectl:
    command:
      - /opt/bitnami/scripts/kubectl-scripts/update-master-label.sh
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      enabled: true
      readOnlyRootFilesystem: true
      runAsGroup: xxx
      runAsNonRoot: true
      runAsUser: xxx
      seLinuxOptions: {}
      seccompProfile:
        type: RuntimeDefault
    image:
      digest: ''
      pullPolicy: IfNotPresent
      pullSecrets: []
      registry: docker.io
      repository: bitnami/kubectl
      tag: 1.30.3-debian-12-r4
    resources:
      limits: {}
      requests: {}
  master:
    affinity: {}
    args: []
    automountServiceAccountToken: false
    command: []
    configuration: ''
    containerPorts:
      redis: xxx
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      enabled: true
      readOnlyRootFilesystem: true
      runAsGroup: xxx
      runAsNonRoot: true
      runAsUser: xxx
      seLinuxOptions: {}
      seccompProfile:
        type: RuntimeDefault
    count: 1
    customLivenessProbe: {}
    customReadinessProbe: {}
    customStartupProbe: {}
    disableCommands:
      - FLUSHDB
      - FLUSHALL
    dnsConfig: {}
    dnsPolicy: ''
    enableServiceLinks: true
    extraEnvVars: []
    extraEnvVarsCM: ''
    extraEnvVarsSecret: ''
    extraFlags: []
    extraVolumeMounts: []
    extraVolumes: []
    hostAliases: []
    initContainers: []
    kind: StatefulSet
    lifecycleHooks: {}
    livenessProbe:
      enabled: true
      failureThreshold: 5
      initialDelaySeconds: 20
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 5
    minReadySeconds: 0
    nodeAffinityPreset:
      key: ''
      type: ''
      values: []
    nodeSelector: {}
    pdb:
      create: true
      maxUnavailable: ''
      minAvailable: ''
    persistence:
      accessModes:
        - xxx
      annotations: {}
      dataSource: {}
      enabled: true
      existingClaim: ''
      labels: {}
      medium: ''
      path: /xxx
      selector: {}
      size: xxx
      sizeLimit: ''
      storageClass: ''
      subPath: ''
      subPathExpr: ''
    persistentVolumeClaimRetentionPolicy:
      enabled: false
      whenDeleted: Retain
      whenScaled: Retain
    podAffinityPreset: ''
    podAnnotations: {}
    podAntiAffinityPreset: soft
    podLabels: {}
    podSecurityContext:
      enabled: true
      fsGroup: xxx
      fsGroupChangePolicy: Always
      supplementalGroups: []
      sysctls: []
    preExecCmds: []
    priorityClassName: ''
    readinessProbe:
      enabled: true
      failureThreshold: 5
      initialDelaySeconds: 20
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 1
    resources: {}
    resourcesPreset: nano
    revisionHistoryLimit: 10
    schedulerName: ''
    service:
      annotations: {}
      clusterIP: ''
      externalIPs: []
      externalTrafficPolicy: Cluster
      extraPorts: []
      internalTrafficPolicy: Cluster
      loadBalancerClass: ''
      loadBalancerIP: ''
      loadBalancerSourceRanges: []
      nodePorts:
        redis: ''
      portNames:
        redis: tcp-redis
      ports:
        redis: xxx
      sessionAffinity: None
      sessionAffinityConfig: {}
      type: ClusterIP
    serviceAccount:
      annotations: {}
      automountServiceAccountToken: false
      create: true
      name: ''
    shareProcessNamespace: false
    sidecars: []
    startupProbe:
      enabled: false
      failureThreshold: 5
      initialDelaySeconds: 20
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 5
    terminationGracePeriodSeconds: 30
    tolerations: []
    topologySpreadConstraints: []
    updateStrategy:
      type: RollingUpdate
  metrics:
    command: []
    containerPorts:
      http: xxx
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      enabled: true
      readOnlyRootFilesystem: true
      runAsGroup: xxx
      runAsNonRoot: true
      runAsUser: xxx
      seLinuxOptions: {}
      seccompProfile:
        type: RuntimeDefault
    customLivenessProbe: {}
    customReadinessProbe: {}
    customStartupProbe: {}
    enabled: false
    extraArgs: {}
    extraEnvVars: []
    extraVolumeMounts: []
    extraVolumes: []
    image:
      digest: ''
      pullPolicy: IfNotPresent
      pullSecrets: []
      registry: docker.io
      repository: bitnami/redis-exporter
      tag: 1.62.0-debian-12-r2
    livenessProbe:
      enabled: true
      failureThreshold: 5
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    podAnnotations:
      prometheus.io/port: 'xxx'
      prometheus.io/scrape: 'true'
    podLabels: {}
    podMonitor:
      additionalEndpoints: []
      additionalLabels: {}
      enabled: false
      honorLabels: false
      interval: 30s
      metricRelabelings: []
      namespace: ''
      podTargetLabels: []
      port: metrics
      relabelings: []
      relabellings: []
      sampleLimit: false
      scrapeTimeout: ''
      targetLimit: false
    prometheusRule:
      additionalLabels: {}
      enabled: false
      namespace: ''
      rules: []
    readinessProbe:
      enabled: true
      failureThreshold: 3
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    redisTargetHost: localhost
    resources: {}
    resourcesPreset: nano
    service:
      annotations: {}
      clusterIP: ''
      enabled: true
      externalTrafficPolicy: Cluster
      extraPorts: []
      loadBalancerClass: ''
      loadBalancerIP: ''
      loadBalancerSourceRanges: []
      ports:
        http: xxx
      type: ClusterIP
    serviceMonitor:
      additionalEndpoints: []
      additionalLabels: {}
      enabled: false
      honorLabels: false
      interval: 30s
      metricRelabelings: []
      namespace: ''
      podTargetLabels: []
      port: http-metrics
      relabelings: []
      relabellings: []
      sampleLimit: false
      scrapeTimeout: ''
      targetLimit: false
    startupProbe:
      enabled: false
      failureThreshold: 5
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
  nameOverride: ''
  nameResolutionThreshold: 5
  nameResolutionTimeout: 5
  namespaceOverride: ''
  networkPolicy:
    allowExternal: true
    allowExternalEgress: true
    enabled: true
    extraEgress: []
    extraIngress: []
    ingressNSMatchLabels: {}
    ingressNSPodMatchLabels: {}
    metrics:
      allowExternal: true
      ingressNSMatchLabels: {}
      ingressNSPodMatchLabels: {}
  pdb: {}
  podSecurityPolicy:
    create: false
    enabled: false
  rbac:
    create: false
    rules: []
  replica:
    affinity: {}
    args: []
    automountServiceAccountToken: false
    autoscaling:
      enabled: false
      maxReplicas: 11
      minReplicas: 1
      targetCPU: ''
      targetMemory: ''
    command: []
    configuration: ''
    containerPorts:
      redis: xxx
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      enabled: true
      readOnlyRootFilesystem: true
      runAsGroup: xxx
      runAsNonRoot: true
      runAsUser: xxx
      seLinuxOptions: {}
      seccompProfile:
        type: RuntimeDefault
    customLivenessProbe: {}
    customReadinessProbe: {}
    customStartupProbe: {}
    disableCommands:
      - FLUSHDB
      - FLUSHALL
    dnsConfig: {}
    dnsPolicy: ''
    enableServiceLinks: true
    externalMaster:
      enabled: false
      host: ''
      port: xxx
    extraEnvVars: []
    extraEnvVarsCM: ''
    extraEnvVarsSecret: ''
    extraFlags: []
    extraVolumeMounts: []
    extraVolumes: []
    hostAliases: []
    initContainers: []
    kind: StatefulSet
    lifecycleHooks: {}
    livenessProbe:
      enabled: true
      failureThreshold: 5
      initialDelaySeconds: 20
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 5
    minReadySeconds: 0
    nodeAffinityPreset:
      key: ''
      type: ''
      values: []
    nodeSelector: {}
    pdb:
      create: true
      maxUnavailable: ''
      minAvailable: ''
    persistence:
      accessModes:
        - xxx
      annotations: {}
      dataSource: {}
      enabled: false
      existingClaim: ''
      labels: {}
      medium: ''
      path: /xxx
      selector: {}
      size: xxx
      sizeLimit: ''
      storageClass: ''
      subPath: ''
      subPathExpr: ''
    persistentVolumeClaimRetentionPolicy:
      enabled: false
      whenDeleted: Retain
      whenScaled: Retain
    podAffinityPreset: ''
    podAnnotations: {}
    podAntiAffinityPreset: soft
    podLabels: {}
    podManagementPolicy: ''
    podSecurityContext:
      enabled: true
      fsGroup: xxx
      fsGroupChangePolicy: Always
      supplementalGroups: []
      sysctls: []
    preExecCmds: []
    priorityClassName: ''
    readinessProbe:
      enabled: true
      failureThreshold: 5
      initialDelaySeconds: 20
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 1
    replicaCount: 3
    resources: {}
    resourcesPreset: nano
    revisionHistoryLimit: 10
    schedulerName: ''
    service:
      annotations: {}
      clusterIP: ''
      externalTrafficPolicy: Cluster
      extraPorts: []
      internalTrafficPolicy: Cluster
      loadBalancerClass: ''
      loadBalancerIP: ''
      loadBalancerSourceRanges: []
      nodePorts:
        redis: ''
      ports:
        redis: xxx
      sessionAffinity: None
      sessionAffinityConfig: {}
      type: ClusterIP
    serviceAccount:
      annotations: {}
      automountServiceAccountToken: false
      create: true
      name: ''
    shareProcessNamespace: false
    sidecars: []
    startupProbe:
      enabled: true
      failureThreshold: 22
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    terminationGracePeriodSeconds: 30
    tolerations: []
    topologySpreadConstraints: []
    updateStrategy:
      type: RollingUpdate
  secretAnnotations: {}
  sentinel:
    annotations: {}
    args: []
    automateClusterRecovery: false
    command: []
    configuration: ''
    containerPorts:
      sentinel: xxx
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      enabled: true
      readOnlyRootFilesystem: true
      runAsGroup: xxx
      runAsNonRoot: true
      runAsUser: xxx
      seLinuxOptions: {}
      seccompProfile:
        type: RuntimeDefault
    customLivenessProbe: {}
    customReadinessProbe: {}
    customStartupProbe: {}
    downAfterMilliseconds: 60000
    enableServiceLinks: true
    enabled: false
    externalMaster:
      enabled: false
      host: ''
      port: xxx
    extraEnvVars: []
    extraEnvVarsCM: ''
    extraEnvVarsSecret: ''
    extraVolumeMounts: []
    extraVolumes: []
    failoverTimeout: 180000
    getMasterTimeout: 90
    image:
      debug: false
      digest: ''
      pullPolicy: IfNotPresent
      pullSecrets: []
      registry: docker.io
      repository: bitnami/redis-sentinel
      tag: 7.2.5-debian-12-r4
    lifecycleHooks: {}
    livenessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 20
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    masterService:
      annotations: {}
      clusterIP: ''
      enabled: false
      externalTrafficPolicy: ''
      extraPorts: []
      loadBalancerClass: ''
      loadBalancerIP: ''
      loadBalancerSourceRanges: []
      nodePorts:
        redis: ''
      ports:
        redis: xxx
      sessionAffinity: None
      sessionAffinityConfig: {}
      type: ClusterIP
    masterSet: mymaster
    parallelSyncs: 1
    persistence:
      accessModes:
        - xxx
      annotations: {}
      dataSource: {}
      enabled: false
      labels: {}
      medium: ''
      selector: {}
      size: xxx
      sizeLimit: ''
      storageClass: ''
    persistentVolumeClaimRetentionPolicy:
      enabled: false
      whenDeleted: Retain
      whenScaled: Retain
    preExecCmds: []
    quorum: 2
    readinessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 20
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 1
    redisShutdownWaitFailover: true
    resources: {}
    resourcesPreset: nano
    service:
      annotations: {}
      clusterIP: ''
      createMaster: false
      externalTrafficPolicy: Cluster
      extraPorts: []
      headless:
        annotations: {}
      loadBalancerClass: ''
      loadBalancerIP: ''
      loadBalancerSourceRanges: []
      nodePorts:
        redis: ''
        sentinel: ''
      ports:
        redis: xxx
        sentinel: xxx
      sessionAffinity: None
      sessionAffinityConfig: {}
      type: ClusterIP
    startupProbe:
      enabled: true
      failureThreshold: 22
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    terminationGracePeriodSeconds: 30
  serviceAccount:
    annotations: {}
    automountServiceAccountToken: false
    create: true
    name: ''
  serviceBindings:
    enabled: false
  sysctl:
    command: []
    enabled: false
    image:
      digest: ''
      pullPolicy: IfNotPresent
      pullSecrets: []
      registry: docker.io
      repository: bitnami/os-shell
      tag: 12-debian-12-r27
    mountHostSys: false
    resources: {}
    resourcesPreset: nano
  tls:
    authClients: true
    autoGenerated: false
    certCAFilename: ''
    certFilename: ''
    certKeyFilename: ''
    certificatesSecret: ''
    dhParamsFilename: ''
    enabled: false
    existingSecret: ''
  useExternalDNS:
    additionalAnnotations: {}
    annotationKey: external-dns.alpha.kubernetes.io/
    enabled: false
    suffix: ''
  useHostnames: true
  volumePermissions:
    containerSecurityContext:
      runAsUser: 0
      seLinuxOptions: {}
    enabled: false
    image:
      digest: ''
      pullPolicy: IfNotPresent
      pullSecrets: []
      registry: docker.io
      repository: bitnami/os-shell
      tag: 12-debian-12-r27
    resources: {}
    resourcesPreset: nano
replicaCount: 1
resources: {}
securityContext: {}
service:
  annotations: {}
  loadBalancerIP: ''
  nodePort: null
  port: 8080
  type: ClusterIP
startupProbe:
  enabled: true
  failureThreshold: 30
  initialDelaySeconds: 30
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
tolerations: null
global:
  cattle:
    clusterId: local
    clusterName: xxx
    rkePathPrefix: ''
    rkeWindowsPathPrefix: ''
    systemProjectId: xxx
    url: https://xxx

Xenon-777 — Jul 30 '25 08:07

You have two options, as this is a breaking change from the upstream chart:

  • Force the upgrade with Helm
  • Delete the StatefulSet without cascading (this will not stop the DB pod) and then apply the chart.
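The two options above correspond roughly to the following commands; the release name `nextcloud`, namespace `nextcloud`, and chart reference are assumptions to adjust for your setup:

```shell
# Option 1: force the upgrade — Helm deletes and recreates
# resources that cannot be patched in place
helm upgrade nextcloud nextcloud/nextcloud --namespace nextcloud --force -f values.yaml

# Option 2: delete only the StatefulSet object, orphaning its pod
# so MariaDB keeps running, then let the upgrade recreate the object
kubectl delete statefulset nextcloud-mariadb --namespace nextcloud --cascade=orphan
helm upgrade nextcloud nextcloud/nextcloud --namespace nextcloud -f values.yaml
```

Note that `--cascade=orphan` is the current spelling of the flag; older kubectl versions accepted `--cascade=false` for the same behavior.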

varet80 — Jul 31 '25 03:07

Deleting the StatefulSet sounds a little less invasive than a force upgrade. Wouldn't it also be a good idea to scale the Nextcloud deployment to 0 replicas first, in order to interrupt database connections?

Xenon-777 avatar Jul 31 '25 05:07 Xenon-777

If you delete the StatefulSet with --cascade=false, the MariaDB pod will continue running and your setup will not be interrupted.
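
For example (namespace and label selector are assumptions; recent kubectl versions spell this `--cascade=orphan`, with `--cascade=false` as the deprecated alias):

```shell
# Delete only the StatefulSet object; its pod is orphaned, not terminated
kubectl delete statefulset nextcloud-mariadb -n nextcloud --cascade=orphan

# The mariadb pod should still show as Running afterwards
kubectl get pods -n nextcloud -l app.kubernetes.io/name=mariadb
```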

varet80 avatar Aug 02 '25 02:08 varet80

To all who follow here ... neither option works.

Deleting the StatefulSet left the upgrade stuck forever on the generation of the StatefulSet object, and forcing produces the same error message as without --force.

I will try scaling Nextcloud and the database to 0 replicas, then deleting the StatefulSet, and upgrading in the next few days. Right now I have no time for it, sorry.

PS: I think I read years ago in a Kubernetes update log that StatefulSets and ReplicaSets should not be managed manually. I could be wrong. But seeing how much trouble manually managed StatefulSets cause, not only here but in other scenarios too, I believe I did read it. IMO.

Xenon-777 avatar Aug 05 '25 06:08 Xenon-777

OK ... this works.

So ... the steps are:

  • scale the replicas of the nextcloud deployment to 0
  • scale the replicas of the nextcloud-mariadb StatefulSet to 0
  • check that all pods are gone; only the redis pods should remain
  • upgrade with helm
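
The steps above as commands; the release name, namespace, and chart reference are assumptions from this thread:

```shell
kubectl scale deployment nextcloud -n nextcloud --replicas=0
kubectl scale statefulset nextcloud-mariadb -n nextcloud --replicas=0

# Wait until only the redis pods remain
kubectl get pods -n nextcloud

helm upgrade nextcloud nextcloud/nextcloud -n nextcloud -f values.yaml
```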

After this, the upgrade runs the correction scripts:

  su -s /bin/sh www-data -c "php occ recognize:download-models"
  su -s /bin/sh www-data -c "php occ db:add-missing-columns"
  su -s /bin/sh www-data -c "php occ db:add-missing-indices"
  su -s /bin/sh www-data -c "php occ db:add-missing-primary-keys"

Then wait 5-10 minutes and open the cloud web UI; you will see an error. Wait another 5-10 minutes and everything is OK.

The data is also still there; the StatefulSet connects correctly to the existing PVC.

Xenon-777 avatar Aug 06 '25 09:08 Xenon-777

Hello, thanks for the proposed solution. But I don't understand how or why scaling the StatefulSet down to 0 prevents the error during the Helm upgrade. Do I need to delete the StatefulSet as well?

dbutti avatar Aug 07 '25 07:08 dbutti

Hi, first you must understand that Kubernetes is highly abstracted; we are talking about 5-6 abstraction layers, and with Rancher there are 2 more. Everything in Kubernetes is an object. From the Kubernetes perspective, the StatefulSet is an object, the pod is an object, and the PersistentVolumeClaim is an object. Every object in Kubernetes has rules about its attributes: which type each attribute has and whether it is changeable, for example. So when you create a StatefulSet object, it generates a pod object and controls it, and the pod object generates the containers and gives them the information about their storage based on the PersistentVolumeClaim object.

So the problem is: if I delete the StatefulSet object and let Helm generate a new one, the new StatefulSet object sees the existing pod, but it did not generate it, cannot control it anymore, and gets stuck. If I force-upgrade the Helm chart it makes no difference: the rules are absolute, and the Helm upgrade changes fields of the StatefulSet object that are not allowed to change.

My fear was that if I also deleted the pod and the Helm chart generated everything anew, it would also generate a new, clean PersistentVolumeClaim with no data, or the pod would no longer find the existing PersistentVolumeClaim and could not reach the existing data.

Scaling the replicas to 0 says: "you are an existing object, but for now you have no pod to manage, though you still know the PVC". The mechanism is that scaling back to replica 1 says: "you now have a pod to manage, and you know the PVC for it". By deleting the StatefulSet object after scaling it to 0, my hope was to say: "there was an object with no pod to manage, but it had the information about the PVC, so if a new StatefulSet object is generated in place of the old one, it knows the PVC info too".
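
As an illustration of this immutability (resource and namespace names are assumptions), the API server rejects a patch to any forbidden StatefulSet spec field with the same error Helm reports:

```shell
# serviceName is not among the mutable fields (replicas, ordinals, template,
# updateStrategy, persistentVolumeClaimRetentionPolicy, minReadySeconds),
# so this patch is rejected with the "Forbidden: updates to statefulset spec
# for fields other than ..." error
kubectl patch statefulset nextcloud-mariadb -n nextcloud \
  --type merge -p '{"spec":{"serviceName":"some-other-name"}}'
```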

Xenon-777 avatar Aug 08 '25 06:08 Xenon-777

Thanks for your detailed explanation. So, to be absolutely clear, can you please confirm: I must not delete the StatefulSet, only scale it down to 0 replicas, right?

dbutti avatar Aug 08 '25 08:08 dbutti

No, you must delete the StatefulSet; the change the upgrade makes to the StatefulSet is not allowed in Kubernetes. And you must be sure that the StatefulSet has no pod to manage at the moment you delete it.

The question was how to delete the StatefulSet and the pod without losing the connection to the existing PVC. But that fear was apparently unfounded.

The main problem is that the upgrade changes attributes of the StatefulSet that are not allowed to change.

Xenon-777 avatar Aug 11 '25 05:08 Xenon-777

I think this is needed in the env with 11.4, but it can't be set without removing the StatefulSet :-(

- name: MARIADB_ENABLE_SSL
  value: 'no'

I also allow myself to relist the steps proposed by @Xenon-777:

So ... the steps are:

  • scale the replicas of the nextcloud deployment to 0
  • scale the replicas of the nextcloud-mariadb StatefulSet to 0
  • delete the mariadb StatefulSet
  • check that all pods are gone; only the redis pods should remain
  • upgrade with helm
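
A sketch of the full sequence including the deletion (release name, namespace, and chart reference are assumptions):

```shell
kubectl scale deployment nextcloud -n nextcloud --replicas=0
kubectl scale statefulset nextcloud-mariadb -n nextcloud --replicas=0
kubectl delete statefulset nextcloud-mariadb -n nextcloud

# Confirm that only the redis pods remain
kubectl get pods -n nextcloud

helm upgrade nextcloud nextcloud/nextcloud -n nextcloud -f values.yaml
```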

ThumbGen avatar Aug 29 '25 07:08 ThumbGen