azurefile-csi-driver
CIFS credentials appear in the process table
What happened:
The CIFS credentials are passed as mount process arguments, so they appear in the process table and are recorded by auditing tools: https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/54a024d295477f8696a43097a5a50675298038e8/pkg/azurefile/nodeserver.go#L317-L318
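A small illustration of the exposure (hypothetical, not part of the driver): while mount.cifs runs, any local user in the same PID namespace can pull the full argument list, password included, out of /proc/<pid>/cmdline, which is roughly what process-table scanners and audit tooling record.

```go
// Sketch only: scan /proc for command lines containing a CIFS password.
// No privileges are needed to read other processes' argv on a default setup.
package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	procs, _ := filepath.Glob("/proc/[0-9]*/cmdline")
	for _, p := range procs {
		raw, err := os.ReadFile(p)
		if err != nil {
			continue // the process may have exited already
		}
		// Arguments are NUL-separated in /proc/<pid>/cmdline.
		args := strings.Join(strings.Split(string(bytes.TrimRight(raw, "\x00")), "\x00"), " ")
		if strings.Contains(args, "password=") {
			fmt.Println(p, "=>", args) // the CIFS password is visible here
		}
	}
}
```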
The documentation this refers to is also wrong.
What you expected to happen:
No password appearing in the process table; use -o credentials= instead.
How to reproduce it:
Anything else we need to know?:
This is what StackRox finds:
mount.cifs //...file.core.windows.net/v-e... /var/lib/kubelet/plugins/kubernetes.io/csi/file.csi.azure.com/8bcfea950f3d7c3c93c5e63dc372a396f60a1b7e3ccf5e7d4422021f3200a/globalmount -o rw,gid=1001930000,file_mode=0777,dir_mode=0777,actimeo=30,mfsymlinks,username=e...2,password=AxVa7..
Environment:
- CSI Driver version:
- Kubernetes version (use kubectl version): v1.25.12 as bundled in OCP 4.12
- OS (e.g. from /etc/os-release):
- Kernel (e.g. uname -a):
- Install tools:
- Others:
@freedge
sensitiveMountOptions is used in the k8s SMB mount; that's a common practice, and it won't appear in the CSI driver logs:
https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/54a024d295477f8696a43097a5a50675298038e8/pkg/azurefile/nodeserver.go#L343
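A rough sketch of what that mount path looks like, assuming the k8s.io/mount-utils Mounter interface (the function and variable names here are illustrative, not the driver's exact code): MountSensitive redacts the sensitive options from log lines, but both option slices are still joined into the -o string handed to the mount(8) binary's argv, which is why they remain visible in the process table.

```go
// Sketch of the driver-side call, assuming k8s.io/mount-utils.
package example

import (
	mount "k8s.io/mount-utils"
)

func mountSMB(source, target string, options, sensitiveOptions []string) error {
	m := mount.New("") // execs the system mount binary
	// sensitiveOptions (username=..., password=...) are kept out of the logs,
	// but they still end up on the mount command line.
	return m.MountSensitive(source, target, "cifs", options, sensitiveOptions)
}
```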
If you use -o credentials=/path/to/credentials/file, the password would be stored in the credentials file; that's also a security issue:
mount -t cifs //server/share /mnt/mountpoint -o credentials=/path/to/credentials/file
The process table is readable by any user (in the PID namespace), while a file benefits from file permissions and is not recorded by auditing tools. Here it should probably be a file in memory under /run, or a pipe file descriptor, created only for the duration of the mount call; passing the password through stdin would be an alternative.
(some guidelines https://clig.dev/#arguments-and-flags)
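A minimal sketch of that mitigation, assuming a hypothetical helper mountWithCredentialFile (not the driver's current code): write the credentials to a short-lived, owner-only file on tmpfs, reference it with -o credentials=, and remove it as soon as mount(8) returns, so the secret never appears in the mount helper's argv and never reaches persistent storage.

```go
// Sketch: credential file created for the duration of the mount call only.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func mountWithCredentialFile(source, target, account, key string) error {
	// /run is typically tmpfs; os.CreateTemp creates the file with mode 0600.
	f, err := os.CreateTemp("/run", "cifs-cred-*")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name()) // the file lives only as long as the mount call

	// mount.cifs credentials file format: username= and password= lines.
	if _, err := fmt.Fprintf(f, "username=%s\npassword=%s\n", account, key); err != nil {
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}

	opts := "rw,file_mode=0777,dir_mode=0777,credentials=" + f.Name()
	cmd := exec.Command("mount", "-t", "cifs", source, target, "-o", opts)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("mount failed: %v, output: %s", err, out)
	}
	return nil
}

func main() {
	// Placeholder values for illustration only.
	_ = mountWithCredentialFile("//account.file.core.windows.net/share", "/mnt/share", "account", "SECRET")
}
```

Whether /run, a pipe file descriptor, or stdin is used, the key property is that the secret stays out of the mount helper's argument list.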
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Still very much a secret leaking on the command line.