
[kimai] Volume cannot be attached and configuration changes are not applied

Open rhizoet opened this issue 2 years ago • 9 comments

We have deployed kimai in the latest version on Kubernetes.

Problem

If we now change the image.tag value to the current version, for example, the new pod cannot be created because it tries to attach the volume. This fails because the volume is still attached to the currently running pod. Only once the old pod's ReplicaSet is deleted can the new pod attach the volume and start.

Additionally, changes to the configuration value are not applied on helm upgrade; the pod is not even recreated. Editing the file inside the pod is not possible either, because the file system is read-only.

What it should look like

The existing pod should release the volume so that it can be attached to the new pod, which can then start.

With each helm upgrade, the configuration value should be read in again so that changes are picked up. Currently the database and Kimai have to be reinitialized, which is not an option once they contain data.

rhizoet avatar Jul 06 '23 14:07 rhizoet

The volume problem can easily be worked around by setting

updateStrategy:
  type: Recreate

which deletes the old pod, recreates it, and attaches the volume,

or by setting

podAffinityPreset: hard

which should force the new pod to be scheduled on the same node.
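Combined, a minimal values.yaml sketch of both workarounds (using exactly the keys shown above) would be:

    # Workarounds for the volume-attach problem, combined in values.yaml
    updateStrategy:
      type: Recreate      # delete the old pod before creating the new one
    podAffinityPreset: hard  # schedule the new pod on the same node

Note that the Recreate strategy implies a brief downtime on every upgrade, because the old pod is deleted before the new one is created.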


I need to take a look at the configuration file problem, but you certainly don't need to delete your database.
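The usual way charts solve the "config change does not restart the pod" problem is the standard Helm checksum-annotation pattern, sketched below. The template path and names here are illustrative, not necessarily what this chart uses:

    # deployment.yaml template sketch: hash the rendered ConfigMap into a
    # pod annotation, so any configuration change forces a new rollout
    spec:
      template:
        metadata:
          annotations:
            checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}

With something like this in place, helm upgrade changes the annotation whenever the rendered configuration changes, and Kubernetes rolls the pod automatically.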

robjuz avatar Jul 06 '23 15:07 robjuz

Any news on the configuration file problem? I cannot update the chart if I have set the configuration: |- value; the pod does not pick up the change. This is especially important for the SAML config.

rhizoet avatar Jul 18 '23 08:07 rhizoet

I'm on holidays.

Have you tried to delete the pod?

robjuz avatar Jul 18 '23 08:07 robjuz

Ah okay, happy holidays.

I've deleted the pod several times, but that doesn't change anything.

rhizoet avatar Jul 18 '23 08:07 rhizoet

Any news on this?

rhizoet avatar Sep 11 '23 13:09 rhizoet

I updated the chart recently. Please try the latest version.


robjuz avatar Sep 11 '23 14:09 robjuz

No change on the configuration problem. I've updated the title in the SAML part of configuration: |-, but nothing changed. Or should I now do it a different way?

rhizoet avatar Sep 18 '23 08:09 rhizoet

Could you provide some more info about your infrastructure? And maybe a simplified version of your deployment process. Thx.

robjuz avatar Sep 18 '23 13:09 robjuz

Sure, happy to:

We run Kimai on a K8s cluster running version 1.26.9.

Deployment is done with Helm from the command line. For this we pass a values.yaml with the following content:

kimaiAppSecret: secret
kimaiAdminEmail: [email protected]
kimaiAdminPassword: password
ingress:
    enabled: true
    annotations:
        kubernetes.io/ingress.class: nginx
        cert-manager.io/cluster-issuer: letsencrypt-prod
    hostname: kimai.example.com
    tls: true
updateStrategy:
    type: Recreate
configuration: |-
    kimai:
      user:
        registration: false
      saml:
        provider: zitadel
        activate: true
        title: Login with auth
        mapping:
          - { saml: $Email, kimai: email }
          - { saml: $FirstName $SurName, kimai: alias }
        roles:
          resetOnLogin: true
          attribute: Roles
          mapping:
            - { saml: Admin, kimai: ROLE_ADMIN }
            - { saml: Management, kimai: ROLE_TEAMLEAD }
        connection:
          idp:
            entityId: "https://auth.example.com/saml/v2/metadata"
            singleSignOnService:
              url: "https://auth.example.com/saml/v2/SSO"
              binding: "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
            x509cert: "CERT"
          sp:
            entityId: "https://kimai.example.com/"
            assertionConsumerService:
              url: "https://kimai.example.com/auth/saml/acs"
              binding: "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
            singleLogoutService:
              url: "https://kimai.example.com/auth/saml/logout"
              binding: "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
          baseurl: "https://kimai.example.com/auth/saml/"
          strict: false
          debug: true
          security:
            nameIdEncrypted: false
            authnRequestsSigned: false
            logoutRequestSigned: false
            logoutResponseSigned: false
            wantMessagesSigned: false
            wantAssertionsSigned: false
            wantNameIdEncrypted: false
            requestedAuthnContext: true
            signMetadata: false
            wantXMLValidation: true
            signatureAlgorithm: "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"
            digestAlgorithm: "http://www.w3.org/2001/04/xmlenc#sha256"
          contactPerson:
            technical:
              givenName: "Kimai Admin"
              emailAddress: "[email protected]"
            support:
              givenName: "Kimai Support"
              emailAddress: "[email protected]"
          organization:
            en:
              name: "kimai"
              displayname: "Kimai"
              url: "https://kimai.example.com"

Then we run helm upgrade -i kimai -f values.yaml --create-namespace -n kimai robjuz/kimai2.

The K8s cluster itself runs on OpenStack, which we also operate ourselves in our own data center.

rhizoet avatar Sep 18 '23 14:09 rhizoet