
v2.18.1 installing on Ubuntu 20 breaks the docker-ce package

Open itshikanov opened this issue 2 years ago • 7 comments

What would you like to be added:

  • change containerd_cfg_dir to another location, such as /usr/local/containerd/config.toml
  • use another socket location in config.toml: [grpc] address = "/another_location/containerd.sock"
  • use another socket location for the kubelet: --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock; this would avoid breaking the docker-ce package

Why is this needed: We used docker-ce and k8s (containerd) in parallel without problems on v2.17.1. After I updated kubespray to v2.18.1 and installed k8s, containerd and docker-ce stopped working.
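
A rough sketch of what the requested overrides could look like as inventory variables. Only containerd_cfg_dir is an existing variable named above; the other keys and the "/another_location" path are placeholders for the options this issue is asking for, not variables kubespray currently provides:

# group_vars/all/containerd.yml -- illustrative only
containerd_cfg_dir: /usr/local/containerd                        # keep config.toml out of /etc/containerd
containerd_grpc_address: /another_location/containerd.sock       # hypothetical: would set [grpc] address in config.toml
kubelet_cri_socket: unix:///var/run/containerd/containerd.sock   # hypothetical: would set --container-runtime-endpoint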

itshikanov avatar Apr 07 '22 13:04 itshikanov

Could you explain the rationale for having both docker-ce (which has its own containerd) and the upstream containerd on the same host?

If you need docker beyond Kubernetes 1.24, you can use cri-dockerd, which is now supported in kubespray and will be officially released in 2.19; you can also still deploy with docker as the container manager in 2.18.x.
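
For reference, staying on docker as the container manager in 2.18.x is a single inventory variable; the file path below is just the usual place for it in a kubespray inventory, adjust to your layout:

# inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
container_manager: docker    # still supported in 2.18.x; cri-dockerd support arrives in 2.19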

cristicalin avatar Apr 07 '22 15:04 cristicalin

We changed container_manager in k8s to containerd and switched our log parsers to the containerd format back in v2.17.1.

I just want installing k8s not to break docker-ce (dpkg). Maybe through an option like "use_apt_containerd", or maybe through another config and socket location.

In v2.17.1, containerd was installed through apt and docker-ce worked well alongside k8s (containerd).

itshikanov avatar Apr 08 '22 08:04 itshikanov

Ran into a similar issue with 2.18.0: after deploying k8s using 'containerd' as the container runtime with Kubespray 2.8.0, /usr/bin/dockerd is gone. Is this removal expected? I cannot find any explicit Ansible action(s) doing so.

Before the deploy:

~/> ls -la /usr/bin/dockerd
-rwxr-xr-x 1 root root 105108704 Jan 30  2021 /usr/bin/dockerd
~/> sudo systemctl restart docker

...(works)

After the deploy:

~/> ls -la /usr/bin/dockerd
ls: cannot access '/usr/bin/dockerd': No such file or directory
~/> sudo systemctl restart docker
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

vincent-du2020 avatar Apr 15 '22 13:04 vincent-du2020

@vincent-du2020 unless you are referring to 2.18, I don't think 2.8 even had support for containerd at the time.

Note that current master (future 2.19) will remove any container runtime not managed by kubespray. That behavior is currently not covered by a flag to switch it off, so you may want to open an enhancement request issue and explain your use case; that way we can gather more information from the community on whether this kind of behavior (having yet another configuration flag) is worth the maintenance effort over time.

The code in 2.18 does clean up the docker engine if the container manager is set to `containerd`. This is because in 2.18 our containerd is downloaded from upstream git instead of relying on out-of-date OS packages or out-of-date packages coming from docker, and we need to ensure a clean environment when deploying containerd.
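
For anyone curious what that cleanup amounts to, here is a minimal Ansible sketch of the effect described. This is not the actual kubespray role code; the package list and the condition are only illustrative:

# illustrative only -- not kubespray's actual task
- name: Remove docker-provided container runtime packages
  apt:
    name:
      - docker-ce
      - docker-ce-cli
      - containerd.io
    state: absent
  when: container_manager == "containerd"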

cristicalin avatar Apr 16 '22 17:04 cristicalin

@cristicalin: Appreciate your reply. Yes, I did mean 2.18.0 (sorry about the typo). Thanks for confirming that the removal of dockerd when deploying a cluster with containerd is intended.

Could you clarify 'remove any container runtime not managed by kubespray'? For example, if I set 'container_manager' to 'docker' to deploy and later decide to change to 'containerd' or 'cri-o', then 'dockerd' from the previous deployment is considered 'not managed' by Kubespray, correct?

vincent-du2020 avatar Apr 18 '22 16:04 vincent-du2020

Could you clarify 'remove any container runtime not managed by kubespray'? For example, if I set 'container_manager' to 'docker' to deploy and later decide to change to 'containerd' or 'cri-o', then 'dockerd' from the previous deployment is considered 'not managed' by Kubespray, correct?

Yes, that is correct: kubespray in 2.19 will expect to manage the container runtime itself and have just its own container manager running. If you need some other container manager, I think you can look into using dind and running it as a pod under the control of kubernetes.
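
A minimal, untested sketch of what such a dind pod could look like; the image, TLS setting, and volume layout are assumptions for illustration rather than anything kubespray ships:

apiVersion: v1
kind: Pod
metadata:
  name: dind
spec:
  containers:
    - name: dind
      image: docker:dind              # upstream Docker-in-Docker image (assumed suitable here)
      securityContext:
        privileged: true              # dind needs a privileged container
      env:
        - name: DOCKER_TLS_CERTDIR    # empty value disables TLS cert generation for a local-only daemon
          value: ""
      volumeMounts:
        - name: docker-storage
          mountPath: /var/lib/docker
  volumes:
    - name: docker-storage
      emptyDir: {}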

cristicalin avatar Apr 19 '22 06:04 cristicalin

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 18 '22 07:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 17 '22 08:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot avatar Sep 16 '22 08:09 k8s-triage-robot

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Sep 16 '22 08:09 k8s-ci-robot