kubespray
crio: Unknown option overlay.mountopt
Environment:
- Cloud provider or hardware configuration: Baremetal
- OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"): Debian 11 (bullseye), Linux 5.10.0-21-amd64 x86_64
- Version of Ansible (ansible --version): ansible [core 2.12.10]
- Version of Python (python --version): 3.10.6
- Kubespray version (commit) (git rev-parse --short HEAD): c4346e590 (v2.21.0)
- Network plugin used: Calico
My container engine is crio. kubespray sets the option mountopt = "nodev,metacopy=on" in the section [storage.options.overlay] of /etc/containers/storage.conf. When crio is restarted by kubespray or manually, it fails with the following error message:
Mar 17 12:42:06 fs6 crio[1695519]: time="2023-03-17 12:42:06.297820199+01:00" level=fatal msg="validating root config: failed to get store to set defaults: unknown option overlay.mountopt"
If I comment out that line in /etc/containers/storage.conf, everything works fine. My crio version is 1.25.6.
I am using the zfs storage driver for crio, and if I change [storage.options.overlay] to [storage.options.zfs], it works.
So I think the issue is that the value of {{ crio_storage_driver }} is not respected when writing /etc/containers/storage.conf.
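For reference, the relevant part of the generated file looks roughly like this (a sketch of the offending combination only: driver set to zfs, but mountopt still emitted under the overlay section; graphroot shown with the default path, other settings omitted):
# cat /etc/containers/storage.conf (excerpt, reconstructed)
[storage]
driver = "zfs"
graphroot = "/var/lib/containers/storage"
[storage.options.overlay]
mountopt = "nodev,metacopy=on"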
I had the same issue with crio + zfs.
As described in https://github.com/kubernetes-sigs/kubespray/issues/9898#issuecomment-1473735677, I solved this issue by taking the value of {{ crio_storage_driver }} into account.
See: https://github.com/oxmie/kubespray/commit/36c5663370ee436039bd4ca0afdf3ab7614b6a8e
These changes generate the following result if crio_storage_driver: "zfs" is selected.
# cat /etc/containers/storage.conf
[storage]
driver = "zfs"
graphroot = "/var/lib/containers/storage"
[storage.options.zfs]
mountopt = "nodev,metacopy=on"
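The underlying idea is to key the options section on the configured driver instead of hard-coding overlay. A minimal sketch of that template change (illustrative only, not a verbatim copy of the commit above; the actual template file and surrounding settings in kubespray differ):
# Jinja2 sketch for the storage.conf template (illustrative only)
[storage]
driver = "{{ crio_storage_driver }}"
graphroot = "/var/lib/containers/storage"
[storage.options.{{ crio_storage_driver }}]
mountopt = "nodev,metacopy=on"
With the driver left at overlay this renders the same sections as before; with zfs it produces the [storage.options.zfs] section shown above.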
Had a similar, unreadable error when trying to bootstrap a cluster on zfsroot. I think @oxmie's commit above should do the trick? Or do we just rely on https://github.com/openzfs/zfs/issues/8648#issuecomment-1452448356 happening eventually?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
@oxmie would you make a PR of your commit?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.