
[BUG] Multiple secrets for single service result in non-unique "mountPath" error

Open gramian opened this issue 1 year ago • 15 comments

Expected Behavior

Multiple secrets for a single service are correctly translated.

Actual Behavior

"kubectl" reports:

Error from server (Invalid): error when creating "service-deployment.yaml": Deployment.apps "service" is invalid: spec.template.spec.containers[0].volumeMounts[1].mountPath: Invalid value: "/run/secrets": must be unique
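
For illustration, the converted Deployment presumably mounts both secrets at the Compose default location /run/secrets, so the two volumeMounts collide. A sketch of the offending fragment (illustrative only; the exact names kompose generates may differ):

spec:
  template:
    spec:
      containers:
        - name: service
          image: hello-world:latest
          volumeMounts:
            # both mounts default to the same path, which the API server rejects
            - name: secret1
              mountPath: /run/secrets
            - name: secret2
              mountPath: /run/secrets
      volumes:
        - name: secret1
          secret:
            secretName: secret1
        - name: secret2
          secret:
            secretName: secret2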

Steps To Reproduce

  1. Prepare first secret: echo -n 'password1' > textfile1
  2. Prepare second secret: echo -n 'password2' > textfile2
  3. kompose convert -f compose.yaml
  4. minikube start
  5. kubectl apply -f .

Kompose Version

1.34 (cbf2835db)

Docker-Compose file

name: "test"

secrets:
  "secret1":
    file: "textfile1"
  "secret2":
    file: "textfile2"

services:

  "service":
    image: "hello-world:latest"
    ports:
      - "80:80"
    secrets:
      - "secret1"
      - "secret2"

Anything else?

No response

gramian avatar Jun 05 '24 14:06 gramian

@sosan @gramian Do you think https://github.com/kubernetes/kompose/issues/1877 caused this? / same issue?

cdrage avatar Jun 05 '24 14:06 cdrage

@cdrage @sosan The error is also happening for kompose 1.33, but not in kompose 1.32, if that helps.

gramian avatar Jul 23 '24 14:07 gramian

I think I have an answer to this issue: it likely affects all secrets, not only services with multiple secrets; it is just that the multi-secret case produces an error.

Since a workaround seems to be to append the secret's volume subPath to the mountPath, I think the conversion of the Compose file's services/servicename/secrets/target property to the pod's containers/volumeMounts/mountPath needs to be fixed in kompose.

For example, currently in 1.34, if the mountPath becomes /my/secrets, then /my/secrets ends up being the file holding the secret rather than a folder containing it.
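
As an illustration only (not actual kompose output, and assuming the Secret's data key matches the secret name), unique per-secret mounts following the Compose default target /run/secrets/<secret_name> would avoid the collision:

volumeMounts:
  # one mountPath per secret, so the paths no longer collide
  - name: secret1
    mountPath: /run/secrets/secret1
    subPath: secret1
  - name: secret2
    mountPath: /run/secrets/secret2
    subPath: secret2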

gramian avatar Jul 25 '24 12:07 gramian

Hi, sorry for the late reply, and thank you for investigating.

At the moment we do not have enough resources to fix this, but if there is any way you can open a PR, we could review it as well as make any changes and help out with tests if you're willing!

If not, no worries; we'll most likely have more resources next month and can work on this.

cdrage avatar Jul 25 '24 16:07 cdrage

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 23 '24 16:10 k8s-triage-robot

/remove-lifecycle stale

gramian avatar Oct 23 '24 18:10 gramian

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 21 '25 19:01 k8s-triage-robot

/remove-lifecycle stale

gramian avatar Jan 30 '25 15:01 gramian

We just encountered the same issue. If a fix is not already on the way, I can try to fix it and send a PR :)

realgam3 avatar Feb 21 '25 11:02 realgam3

This https://github.com/kubernetes/kubernetes/issues/50345 seems to be related.

gramian avatar Apr 01 '25 06:04 gramian

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jun 30 '25 07:06 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jul 30 '25 07:07 k8s-triage-robot

/remove-lifecycle rotten

jancespivo avatar Jul 30 '25 07:07 jancespivo

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 28 '25 08:10 k8s-triage-robot

/remove-lifecycle stale

jancespivo avatar Oct 28 '25 08:10 jancespivo