
crio: Unknown option overlay.mountopt

Open timonegk opened this issue 1 year ago • 9 comments

Environment:

  • Cloud provider or hardware configuration: Baremetal

  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"): Debian 11 (bullseye), Linux 5.10.0-21-amd64 x86_64

  • Version of Ansible (ansible --version): ansible [core 2.12.10]

  • Version of Python (python --version): 3.10.6

Kubespray version (commit) (git rev-parse --short HEAD): c4346e590 (v2.21.0)

Network plugin used: Calico

My container engine is crio. kubespray sets the option mountopt = "nodev,metacopy=on" in the section [storage.options.overlay] in /etc/containers/storage.conf. When crio is restarted by kubespray or manually, it fails with the following error message:

Mar 17 12:42:06 fs6 crio[1695519]: time="2023-03-17 12:42:06.297820199+01:00" level=fatal msg="validating root config: failed to get store to set defaults: unknown option overlay.mountopt"

If I comment out that line in /etc/containers/storage.conf, everything works fine. My crio version is 1.25.6.

timonegk avatar Mar 17 '23 11:03 timonegk

I am using the zfs storage driver for crio and if I change [storage.options.overlay] to [storage.options.zfs], it works.

timonegk avatar Mar 17 '23 12:03 timonegk

So I think the issue is that the value of {{ crio_storage_driver }} is not respected when writing /etc/containers/storage.conf.
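Concretely, the generated /etc/containers/storage.conf on my node looks roughly like this (a sketch; graphroot and the exact option values depend on the kubespray defaults), with the driver taken from the variable but the options section still hardcoded to overlay:

# cat /etc/containers/storage.conf
[storage]
driver = "zfs"
graphroot = "/var/lib/containers/storage"

[storage.options.overlay]
mountopt = "nodev,metacopy=on"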

timonegk avatar Mar 17 '23 12:03 timonegk

I had the same issue with crio + zfs.

As described in the previous comment https://github.com/kubernetes-sigs/kubespray/issues/9898#issuecomment-1473735677, I solved this issue by taking the value of {{ crio_storage_driver }} into account.

See: https://github.com/oxmie/kubespray/commit/36c5663370ee436039bd4ca0afdf3ab7614b6a8e
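The idea is to use the variable in the options section header instead of a hardcoded overlay, roughly like this in the storage.conf template (a sketch of the approach, not the exact template content from the commit):

[storage]
driver = "{{ crio_storage_driver }}"
graphroot = "/var/lib/containers/storage"

[storage.options.{{ crio_storage_driver }}]
mountopt = "nodev,metacopy=on"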

These changes generate the following result if crio_storage_driver: "zfs" is selected.

# cat /etc/containers/storage.conf
[storage]
driver = "zfs"
graphroot = "/var/lib/containers/storage"

[storage.options.zfs]
mountopt = "nodev,metacopy=on"

oxmie avatar May 10 '23 13:05 oxmie

Had a similar, unreadable error when trying to bootstrap a cluster on a ZFS root. I think @oxmie's commit above should do the trick? Or do we just rely on https://github.com/openzfs/zfs/issues/8648#issuecomment-1452448356 happening eventually?

blackliner avatar Sep 12 '23 15:09 blackliner

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 28 '24 04:01 k8s-triage-robot

@oxmie would you make a PR of your commit?

VannTen avatar Feb 07 '24 21:02 VannTen

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Mar 08 '24 21:03 k8s-triage-robot

/remove-lifecycle rotten

timonegk avatar Mar 08 '24 21:03 timonegk

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jun 06 '24 22:06 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jul 06 '24 22:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Aug 05 '24 23:08 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Aug 05 '24 23:08 k8s-ci-robot