Calico operator fails to read `/var/lib/calico/mtu` because of SELinux
Expected Behavior
The operator should (probably) be able to read the file.
Current Behavior
The operator fails to read the file because SELinux blocks it. I'm not sure whether this actually affects anything beyond spamming the container's and SELinux's logs; since I'm using the default MTU of 1500, I probably wouldn't notice if it did.
It logs dozens of lines like this:
{"level":"info","ts":"2023-07-08T16:28:08Z","logger":"controller_installation","msg":"Reconciling Installation.operator.tigera.io","Request.Namespace":"tigera-operator","Request.Name":"default-token-jcg6t"}
{"level":"error","ts":"2023-07-08T16:28:09Z","logger":"controller_installation","msg":"error reading network MTU","Request.Namespace":"tigera-operator","Request.Name":"default-token-jcg6t","reason":"ResourceReadError","error":"open /var/lib/calico/mtu: permission denied","stacktrace":"github.com/tigera/operator/pkg/controller/status.(*statusManager).SetDegraded\n\t/go/src/github.com/tigera/operator/pkg/controller/status/status.go:406\ngithub.com/tigera/operator/pkg/controller/installation.(*ReconcileInstallation).Reconcile\n\t/go/src/github.com/tigera/operator/pkg/controller/installation/core_controller.go:1460\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:122\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:323\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:274\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235"}
{"level":"error","ts":"2023-07-08T16:28:09Z","msg":"Reconciler error","controller":"tigera-installation-controller","object":{"name":"default-token-jcg6t","namespace":"tigera-operator"},"namespace":"tigera-operator","name":"default-token-jcg6t","reconcileID":"94dd61c3-a61b-4079-bf6f-d0cffe1e63ef","error":"open /var/lib/calico/mtu: permission denied","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:329\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:274\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235"}
Possible Solution
A simple fix is to apply the following strategic merge patch to the tigera-operator container in the operator deployment, but I'm not sure if that is the optimal solution:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tigera-operator
spec:
  template:
    spec:
      containers:
        - name: tigera-operator
          securityContext:
            privileged: true
  selector: {}
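For reference, this is how the patch could be applied, assuming it is saved as privileged-patch.yaml (an arbitrary filename chosen here):

# Strategic merge is kubectl patch's default patch type for built-in resources.
kubectl patch deployment tigera-operator -n tigera-operator --patch-file privileged-patch.yaml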
Steps to Reproduce (for bugs)
- Install the Calico operator (using the manifest) in a cluster whose nodes have SELinux enabled
- Observe the errors in the operator container's log (see the commands below)
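Concretely, something like the following should reproduce it, using the standard v3.26.1 operator manifest:

# Install the operator from the official manifest:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
# Follow the operator's log to see the permission-denied errors:
kubectl logs -n tigera-operator deployment/tigera-operator -f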
Context
Your Environment
- Calico version: v3.26.1
- Orchestrator version (e.g. kubernetes, mesos, rkt): k3s 1.26.6-k3s1
- Operating System and version: Rocky Linux 8 and 9
- Link to your project (optional):
Alternatively, propagating the chosen MTU by some means other than a host mount would remove the need for privilege.
~~Do the Calico manifests also need to enable SELinux?~~ I confirmed that calico-node has SELinux enabled: https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
I can work on this.
Is there an update on this? I'm seeing the following:
SELinux is preventing /usr/local/bin/operator from read access on the file mtu.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that operator should be allowed read access on the mtu file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'operator' --raw | audit2allow -M my-operator
# semodule -X 300 -i my-operator.pp
Additional Information:
Source Context system_u:system_r:container_t:s0:c425,c886
Target Context system_u:object_r:container_var_lib_t:s0
Target Objects mtu [ file ]
Source operator
Source Path /usr/local/bin/operator
Port <Unknown>
Host <redacted>
Source RPM Packages
Target RPM Packages
SELinux Policy RPM selinux-policy-targeted-3.14.3-117.el8_8.3.noarch
Local Policy RPM selinux-policy-targeted-3.14.3-117.el8_8.3.noarch
Selinux Enabled True
Policy Type targeted
Enforcing Mode Enforcing
Host Name <redacted>
Platform Linux <redacted>
4.18.0-305.el8.x86_64 #1 SMP Thu Apr 29 08:54:30
EDT 2021 x86_64 x86_64
Alert Count 2
First Seen 2023-10-30 17:51:40 JST
Last Seen 2023-10-30 17:52:13 JST
Local ID f88d92eb-e6c6-466d-9632-17ee0ef2b90d
Raw Audit Messages
type=AVC msg=audit(1698655933.979:1927): avc: denied { read } for pid=78828 comm="operator" name="mtu" dev="dm-0" ino=68099584 scontext=system_u:system_r:container_t:s0:c425,c886 tcontext=system_u0
Should I allow the access?
Yes, this is still happening, e.g. with k3s when `--selinux` is enabled.
There are two workarounds available right now:
- allow access to this file in SELinux (e.g. via the audit2allow commands shown above), or
- explicitly configure the MTU in the Installation API, which tells the operator that it doesn't need to detect it from this file (see the sketch after this list).
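For the second workaround, a minimal sketch of the Installation resource with an explicit MTU (the field lives under spec.calicoNetwork; 1500 is just an example value):

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    # An explicit MTU tells the operator not to auto-detect it from
    # /var/lib/calico/mtu, which avoids the SELinux-denied read.
    mtu: 1500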
I believe the correct long-term solution is to remove the need for tigera/operator to read that file at all, and instead propagate the MTU to the operator by another means, for example by writing it back to the Kubernetes API somewhere.
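Purely as an illustration of that idea (nothing below exists today; the object name and mechanism are hypothetical), calico-node could publish the detected MTU in a ConfigMap that the operator then reads through the API server instead of a host path:

apiVersion: v1
kind: ConfigMap
metadata:
  # Hypothetical object; neither calico-node nor the operator writes or
  # reads this today.
  name: calico-detected-mtu
  namespace: calico-system
data:
  mtu: "1500"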
This issue is stale because it is kind/enhancement or kind/bug and has been open for 180 days with no activity.
This issue was closed because it has been inactive for 30 days since being marked as stale.