kompose secrets file location
It appears there is a mismatch between where secrets are placed when using kompose vs. docker-compose.
docker-compose places secrets as files in the folder `/run/secrets`, while kompose defines the mount path for each secret as the folder `/run/secrets/{secret-name}`, so the secret file ends up nested one level deeper. This is a breaking change if you are using `kompose convert` to deploy to k8s and need both docker-compose and k8s to work.
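For concreteness, here is a minimal compose file (service and secret names are made up for illustration) that reproduces the mismatch:

```yaml
# docker-compose.yml -- hypothetical minimal example
services:
  app:
    image: myapp:latest
    secrets:
      - my_secret

secrets:
  my_secret:
    file: ./my_secret.txt
```

With docker-compose, the container sees a regular file at `/run/secrets/my_secret`. After `kompose convert`, the generated Deployment sets the secret volume's mountPath to `/run/secrets/my_secret`, which Kubernetes treats as a directory, so the application instead finds the content at `/run/secrets/my_secret/my_secret`.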
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
/reopen
Is there a workaround for this?
@amackillop: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen
@simbamarufu1: Reopened this issue.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
I can write a PR to fix this (if I figure out how to run the script tests locally), but would it even get merged? It would be a breaking change at this point (some users might be expecting their files to come in as directories).
All contributions are welcome! But can you summarize your proposal first? Maybe an example with some explanations.
@hangyan I'd basically change `ConfigSecretVolumes` in `pkg/transformer/kubernetes/kubernetes.go` to always set a SubPath for volume mounts.
This should make it so secrets are always mounted as single files (instead of as a file inside a directory filled with symlinks).
As far as I can tell, there are two different situations happening here (both are illustrated in the snippet after this list):
- User defined a secret in docker-compose without a targetPath, or with targetPath being just a filename. In this case it ends up mounted as `/run/secrets/<target>/<target>`, like OP mentioned. This is incompatible with docker-compose, which would mount it as `/run/secrets/<target>`.
- User defined a secret in docker-compose with a targetPath in a subdirectory, or with targetPath being an absolute path. This case already gives the same result as docker-compose. For `<target>=mysubdir/myfile` it would create `/run/secrets/mysubdir/myfile` (technically `myfile` would be a symlink pointing to some timestamp-based directory, but that shouldn't be a problem here).
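For reference, the two cases in compose long syntax could look like this (secret names and paths are made up):

```yaml
# Hypothetical long-syntax secrets illustrating both cases
services:
  app:
    image: myapp:latest
    secrets:
      # case 1: bare filename target -> kompose currently produces
      # /run/secrets/creds/creds; docker-compose produces /run/secrets/creds
      - source: creds
        target: creds
      # case 2: target with a subdirectory -> both tools produce
      # /run/secrets/mysubdir/myfile
      - source: other_creds
        target: mysubdir/myfile

secrets:
  creds:
    file: ./creds.txt
  other_creds:
    file: ./other.txt
```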
I haven't tested this yet, but the relevant part of `ConfigSecretVolumes` could look something like this:

```go
...
// Default secretConfig.Target to secretConfig.Source, in case the user was using
// the short secret syntax or otherwise did not define a specific target.
target := secretConfig.Target
if target == "" {
    target = secretConfig.Source
}

// If target is an absolute path, use it directly as the MountPath.
var secretMountPath string
if strings.HasPrefix(target, "/") {
    secretMountPath = target
} else {
    // If target is a relative path, prefix it with "/run/secrets/" to replicate
    // what docker-compose would do.
    secretMountPath = "/run/secrets/" + target
}

// Set SubPath to the target filename. This ensures that we end up with a file at
// our MountPath instead of a directory with symlinks (see https://stackoverflow.com/a/68332231).
splitPath := strings.Split(target, "/")
secretFilename := splitPath[len(splitPath)-1]

volSource := api.VolumeSource{
    Secret: &api.SecretVolumeSource{
        SecretName: secretConfig.Source,
        Items: []api.KeyToPath{{
            Key:  secretConfig.Source,
            Path: secretFilename,
        }},
    },
}
...
volMount := api.VolumeMount{
    Name:      vol.Name,
    MountPath: secretMountPath,
    SubPath:   secretFilename,
}
...
```
This would make it so we always mount as files, in the same place as docker-compose, without any symlink shenanigans. It is a breaking change because some people might be expecting this "bug" to be in place, and assuming their files are inside a subdirectory instead of in their correct place.
@campos-ddc your proposal sounds good. To prevent existing installations from breaking, I think this behaviour should be followed only if some cmd flag is passed. That way we can have both sets of behaviours coexist peacefully.
@sbs2001 ok, I'll do it that way. I'm going on vacation for a couple of weeks but hopefully I'll have a PR soon.
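A minimal sketch of what that gating could look like, building on the snippet above; the option field `SecretsAsFiles` (and whatever CLI flag would set it) is hypothetical and would be settled in the actual PR:

```go
// Hypothetical opt-in option; kompose does not define this field today.
var volMount api.VolumeMount
if opt.SecretsAsFiles {
    // New behaviour: mount the secret as a single file at the
    // docker-compose-compatible path, using SubPath (as above).
    volMount = api.VolumeMount{
        Name:      vol.Name,
        MountPath: secretMountPath,
        SubPath:   secretFilename,
    }
} else {
    // Legacy behaviour: keep the current directory-style mount at
    // /run/secrets/<target>, so existing conversions are unaffected.
    volMount = api.VolumeMount{
        Name:      vol.Name,
        MountPath: "/run/secrets/" + target,
    }
}
```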
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
/reopen
/remove-lifecycle rotten
@boyvinall: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen
@simbamarufu1: Reopened this issue.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale