caCertsExtraVolumePaths should point to the correct distro paths
/kind bug
/area security
/sig cluster-lifecycle
Versions
kubeadm version (use kubeadm version): 1.15
What happened?
In https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/phases/controlplane/volumes.go#L42, caCertsExtraVolumePaths is an array of common system CA certificate paths. However, on RHEL/Fedora-based systems, /etc/pki is a higher-level directory that also contains the private keys of daemons following the Fedora packaging standards, so mounting it grants access to far more than CA certificates.
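For context, the list in question looks roughly like this (paraphrased from the linked file; check the source for the authoritative contents):

// Paraphrased from cmd/kubeadm/app/phases/controlplane/volumes.go; these extra
// host paths get mounted because /etc/ssl/certs may be, or contain, a symlink
// into them.
var caCertsExtraVolumePaths = []string{
	"/etc/pki",                         // RHEL/Fedora: the whole PKI tree, including daemon private keys
	"/usr/share/ca-certificates",
	"/usr/local/share/ca-certificates",
	"/etc/ca-certificates",
}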
What you expected to happen?
The true CA certificate bundle path for openssl on Fedora-derived systems is /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem.
/etc/pki/ca-trust/extracted/pem/ can also be used for a directory of PEM files, as per the README in that directory:
This directory /etc/pki/ca-trust/extracted/pem/ contains
CA certificate bundle files which are automatically created
based on the information found in the
/usr/share/pki/ca-trust-source/ and /etc/pki/ca-trust/source/
directories.
All files are in the BEGIN/END CERTIFICATE file format,
as described in the x509(1) manual page.
Distrust information cannot be represented in this file format,
and distrusted certificates are missing from these files.
If your application isn't able to load the PKCS#11 module p11-kit-trust.so,
then you can use these files in your application to load a list of global
root CA certificates.
How to reproduce it (as minimally and precisely as possible)?
N/A
Anything else we need to know?
The certificate bundle is derived from the ca-trust store:
In Red Hat Enterprise Linux, the consolidated system-wide trust store is located in the /etc/pki/ca-trust/ and /usr/share/pki/ca-trust-source/ directories. The trust settings in /usr/share/pki/ca-trust-source/ are processed with lower priority than settings in /etc/pki/ca-trust/.
openssl then consumes the bundle from /etc/pki/tls/certs/ca-bundle.crt.
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/security_hardening/using-shared-system-certificates_security-hardening
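To see how these paths chain together on a particular host, here is a minimal standalone sketch (not kubeadm code) that resolves the common locations to their final targets; the paths passed in are just examples:

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// On Fedora derivatives, /etc/ssl/certs typically resolves to
	// /etc/pki/tls/certs, whose ca-bundle.crt is itself a symlink into
	// /etc/pki/ca-trust/extracted/pem/.
	paths := []string{
		"/etc/ssl/certs",
		"/etc/pki/tls/certs/ca-bundle.crt",
	}
	for _, p := range paths {
		resolved, err := filepath.EvalSymlinks(p)
		if err != nil {
			fmt.Printf("%s: %v\n", p, err)
			continue
		}
		fmt.Printf("%s -> %s\n", p, resolved)
	}
}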
/good-first-issue /help-wanted
@randomvariable: This request has been marked as suitable for new contributors.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.
In response to this:
/good-first-issue /help-wanted
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@randomvariable how should we solve this problem? Handling distro-specific paths is not preferred.
However, on RHEL/Fedora-based systems, /etc/pki is a higher-level directory that also contains the private keys of daemons following the Fedora packaging standards.
More can of worms:
All the distros support /etc/ssl/certs, but in most cases, they're symlinks to elsewhere.
So far we have:
- OpenSUSE derivatives (SUSE): /var/lib/ca-certificates/pem
- Debian derivatives (Ubuntu): /etc/ssl/certs
- Fedora derivatives (RHEL, CentOS, Oracle, Amazon): /etc/pki/ca-trust/extracted/pem/
- Arch derivatives (Manjaro): /etc/ca-certificates/extracted/cadir/
- Gentoo: /etc/ssl/certs
- Alpine: /etc/ssl/certs
I don't think we need to support every distro, but we should support distro families. I would at least consider Debian and Fedora as major families. OpenSUSE is also a downstream consumer of kubeadm as a part of Kubic.
Additionally, all of these distros do have /etc/ssl/certs; it's just that it's symlinked in different ways, sometimes multiply so.
For Fedora derivatives, for example, /etc/ssl/certs resolves to /etc/pki/tls/certs. That directory holds a concatenated bundle of certs, which is in turn symlinked from /etc/pki/ca-trust/extracted/pem/. I think we'll need to store these in the array, probably with /etc/ssl/certs last; see the sketch below.
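To make that concrete, a sketch of what the expanded array could look like, with the family-specific extracted-PEM directories listed explicitly and /etc/ssl/certs last (illustrative ordering, not a merged change):

// Sketch: candidate CA paths per distro family, most specific first,
// with the common /etc/ssl/certs symlink target last.
var caCertsExtraVolumePaths = []string{
	"/etc/pki/ca-trust/extracted/pem",      // Fedora derivatives (RHEL, CentOS, Oracle, Amazon)
	"/var/lib/ca-certificates/pem",         // OpenSUSE derivatives
	"/etc/ca-certificates/extracted/cadir", // Arch derivatives (Manjaro)
	"/usr/share/ca-certificates",           // Debian derivatives
	"/usr/local/share/ca-certificates",     // Debian derivatives, locally installed CAs
	"/etc/ca-certificates",                 // Debian derivatives
	"/etc/ssl/certs",                       // common to all families, usually a symlink; keep last
}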
However, on RHEL/Fedora-based systems, /etc/pki is a higher-level directory that also contains the private keys of daemons following the Fedora packaging standards.
Do you mean there is a risk of a read-write conflict between those daemons and the containers, in the scenario where /etc/pki is mounted into the containers?
Nope. A compromise of, say, kube-controller-manager by some mechanism shouldn't grant an attacker access to the private keys of a colocated haproxy load balancer, which it might if /etc/pki is mounted instead of the directory that exists purely for publishing CAs.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/lifecycle frozen
@randomvariable
currently we are mounting:
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
- mountPath: /etc/pki
name: etc-pki
readOnly: true
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
given the golang search paths:
https://golang.org/src/crypto/x509/root_linux.go
and that AFAIK no distro is supposed to use /etc/pki directly:
should we just replace /etc/pki with /etc/pki/tls/certs?
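For reference, the search list in root_linux.go looked roughly like this at the time (copied from the Go standard library; it has shifted between releases, so verify against the current tree):

// crypto/x509/root_linux.go: possible certificate files; Go stops after
// the first one that loads successfully.
var certFiles = []string{
	"/etc/ssl/certs/ca-certificates.crt",                // Debian/Ubuntu/Gentoo etc.
	"/etc/pki/tls/certs/ca-bundle.crt",                  // Fedora/RHEL 6
	"/etc/ssl/ca-bundle.pem",                            // OpenSUSE
	"/etc/pki/tls/cacert.pem",                           // OpenELEC
	"/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem", // CentOS/RHEL 7
	"/etc/ssl/cert.pem",                                 // Alpine Linux
}

Since Go stops at the first match, mounting /etc/pki/tls/certs would satisfy the Fedora-family entries without exposing the rest of /etc/pki.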
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale