`minikube mount` does not follow symlinks
Premise: my workspace uses symlinks to make the Projects directory (mentioned below) available; it has something like `ln -s /Users/Shared/Projects /Users/myuser/Projects/`
Steps to reproduce the issue:
0. There is already a service deployed.
1. `minikube start` (automatically performs the `minikube mount`)
😄 minikube v1.15.1 on Darwin 10.14.6
▪ MINIKUBE_ACTIVE_DOCKERD=minikube
✨ Using the virtualbox driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing virtualbox VM for "minikube" ...
🐳 Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass, dashboard
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
2. `minikube ssh -- ls -la /Users/myuser/Projects` returns:
lrwxr-xr-x 1 docker docker 18 Aug 30 2019 /Users/myuser/Projects -> ../Shared/Projects
👍 The symlink was followed correctly.
Unfortunately, by default `minikube mount` mounts the host home (`/Users` on macOS) with ownership from minikube's `docker` user, while my service needs a user `app`.
This breaks file permissions (in the pod's container of the running service) when trying to `kubectl exec` commands that need to execute & write inside the containers, because the project folder is mounted as `docker:docker` instead of `app:app`.
3. `minikube mount /Users:/Users --uid 61234 --gid 61234` should solve the above issue by allowing the UID and GID of the mounting user to be specified (the numbers correspond to `app:app`).
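For reference, the numeric values to pass to `--uid`/`--gid` can be looked up with the POSIX `id` command; for a user inside a container image (like `app` here), run the same command inside the container instead:

```shell
# Print the numeric uid/gid of the current user. For a container user,
# run this inside the container (e.g. `docker run --rm <image> id app`).
id -u   # numeric uid
id -g   # numeric gid
```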
BUT, after performing the fix to the mount:
4. `minikube ssh -- ls -la /Users/myuser/Projects/` returns:
ls: cannot access '/Users/myuser/Projects/': Not a directory
5. `minikube ssh -- ls -la /Users/myuser/Projects` returns:
-rwxr-xr-x 1 61234 61234 18 Aug 30 2019 /Users/myuser/Projects
😖 The symlink functionality has been lost.
It reports that the path is not a directory, when it should follow the symlink and let the mount access its content.
Even more unfortunate is that, after terminating the manual `minikube mount` command, the mount is not restored to the default automatic one.
Off-topic, but maybe `--uid` and `--gid` should be supported directly on the `minikube start` command, so that the correct user is handled automatically.
Workaround: it is possible to mount my project folder directly, executing from inside it: `minikube mount $(pwd):$(pwd) --uid 61234 --gid 61234`. This skips the need for the symlink.
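To make the manual step harder to forget, a small wrapper could start the mount in the background and stop it when the shell exits. This is only a sketch: `mount_here` is my own name, not a minikube command, and the uid/gid values are the ones from above.

```shell
# Hypothetical helper, not part of minikube: background the manual mount
# and kill it when the calling shell exits.
mount_here() {
  minikube mount "$(pwd):$(pwd)" --uid 61234 --gid 61234 &
  MOUNT_PID=$!
  # Tear the mount down on shell exit so it does not go stale.
  trap 'kill "$MOUNT_PID" 2>/dev/null' EXIT
}
```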
However, I consider this a bug and thought it made sense to report it in an issue so it can be fixed. Even though there is a workaround, it is not ideal: it requires the user to always remember to mount manually, and it makes the automatic behaviour inconsistent (hence unpredictable) and buggy (when closing the manual mount).
/kind support
Additionally: when I kill the process that keeps the workaround's manual mount alive, the permissions are not restored to the default (`docker:docker`); instead, access to the folder is lost completely ⚠️.
`minikube ssh -- ls -la /Users/myuser/Projects/namespace/myproject` returns:
d????????? ? ? ? ? ? myproject
Any attempt to fix or re-mount manually with the workaround command returns `Input/output error`. (The `?` columns mean `ls` could list the entry but `stat()` on it failed, which fits a dead 9p connection.)
The only way I found to restore the permissions is `minikube stop`/`minikube start`, or restarting the VM.
Hey @Pictor13, sorry for the delay in responding to your issue. I'm not very familiar with `minikube mount`, but I did find an issue that seems like it could be related: #4621.
Looks like some users found a workaround; I wonder if it would work for you as well? https://github.com/kubernetes/minikube/issues/4621#issuecomment-506463147
@Pictor13 Did you try taking a look at the related issues linked above?
Hi @Pictor13, we haven't heard back from you; do you still have this issue? There isn't enough information in this issue to make it actionable, and enough time has passed that it is likely difficult to replicate.
I will close this issue for now but feel free to reopen when you feel ready to provide more information.
Hi @spowelljr, hi @priyawadhwa,
sorry, I missed the GitHub notifications.
Yes, I've looked at the linked issue. The workaround fixed the resolution of symlinks, so at least I can run the `minikube mount` command from any directory on my host machine.
However, as mentioned above, this is not a viable long-term solution, since it requires the user to always remember to mount the volume manually.
Forgetting it causes errors in the pods that are not obviously related to this minikube issue, leading the user into long, confusing debugging sessions.
As commented in the linked issue, the workaround actually causes other permission issues.
Also, having to keep a terminal open to keep the mount active does not play nicely with suspending the host machine or the virtual machine to resume work later.
Additionally, this defeats one of Minikube's main features: the auto-mount of the host volume.
The parameter `--9p-version 9p2000.u` is hard to remember and error-prone.
If the default 9p version is buggy with this driver, then I believe Minikube should use the working version by default.
The workaround in #4621 is from 2019; in my opinion it is not ideal to still need it after 2 years, especially given that the same issue mentions that needing this parameter to fix the wrong behaviour is an actual regression.
The last (personal) pain point is how time-consuming it is to figure out that `minikube mount` is responsible for seemingly unrelated filesystem problems (usually I first assume I am wrong, rather than the software I use).
Having to debug, plus dig through the many issues here on GitHub to find a workaround, can become costly (presumably not just for me).
Finally, I tried to mount the volume manually via `minikube mount --uid 61234 --gid 61234 --9p-version 9p2000.u $(pwd):$(pwd)`, but the owner:group is still set to `docker:docker`.
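One way to check whether the options were honoured is to compare numeric ownership on both sides with `ls -n`, since user names like `docker` or `app` may not exist on both the host and the VM (the VM path below is the one from my setup):

```shell
# Inside the minikube VM (commented out here, needs a running cluster):
#   minikube ssh -- ls -lan /Users/myuser/Projects
# On the host, the equivalent numeric listing for comparison:
ls -lan "$HOME" | head -n 3
```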
I believe `minikube mount` needs a general revision to improve quality, reliability & predictability.
My report only covers macOS; there might be additional issues on Windows or with other hypervisors (I didn't search the issue tracker for those).
- Is this information + rationale enough to make the issue actionable? If so, please re-open the issue.
- Do my (bold) points also make sense to you? Or am I being too demanding?
- Do you plan to work on resolving these problems with `minikube mount`, or to implement a different mounting solution?
Thank you for your persistent work though.
/reopen
@Pictor13: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
A lot of the issues we've experienced with auto-mounting comes down to issues with 9p, which we can't really do much about. We've explored other options for mounting, but haven't had the capacity to implement something new. We'd be happy to review a PR that does this though.
I totally understand. And I'd love to have enough Go & k8s & 9p knowledge (and enough bandwidth) to help with a PR, but I have to stay realistic 🙁
Would the minikube team be able to draft a list of working workarounds, or official recommendations, in the documentation, as long as there is no fix?
In any case, to help future action, I think it might make sense to mention which exact project/repo provides the 9p implementation.
(I also read in #3796 that there are both a client and a server to update.)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle frozen
Oops, I misread that as spowelljr marking it frozen when it was the k8s robot; I didn't read closely enough, sorry.
This is still an issue. Trying to set it back to frozen, assuming that is the proper tag:
/add-lifecycle frozen