1. Issue or feature description
2. Steps to reproduce the issue
```shell
docker build -t nvidia/k8s-device-plugin:1.0.0-beta6 https://github.com/NVIDIA/k8s-device-plugin.git#1.0.0-beta6
```

gives this error:

```
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /tmp/docker-build-git043282582/docker/ubuntu16.04: no such file or directory
```
3. Information to attach (optional if deemed irrelevant)
Common error checking:
- [ ] The output of `nvidia-smi -a` on your host:
```
Sun May 31 10:33:28 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.82       Driver Version: 440.82       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro P6000        Off  | 00000000:09:00.0 Off |                  Off |
| 26%   23C    P8     8W / 250W |      0MiB / 24449MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```
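For completeness, the driver/CUDA pairing in the banner above can be checked programmatically; this is a minimal sketch (the `parse_banner` helper and the inline sample string are illustrative, not part of the plugin or this report):

```python
import re

# Inline copy of the nvidia-smi banner line captured above.
BANNER = "| NVIDIA-SMI 440.82       Driver Version: 440.82       CUDA Version: 10.2     |"

def parse_banner(line):
    """Return (nvidia_smi, driver, cuda) version strings from an nvidia-smi banner line."""
    m = re.search(
        r"NVIDIA-SMI\s+(\S+)\s+Driver Version:\s+(\S+)\s+CUDA Version:\s+(\S+?)\s*\|",
        line,
    )
    return m.groups() if m else None

print(parse_banner(BANNER))  # → ('440.82', '440.82', '10.2')
```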
- [ ] Your docker configuration file (e.g. `/etc/docker/daemon.json`):
```json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```
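A quick sanity check of the configuration above is to parse it and confirm the nvidia runtime is registered and set as the default; a minimal sketch, with the JSON inlined from this report:

```python
import json

# Inline copy of the /etc/docker/daemon.json shown above.
daemon_json = """
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
"""

cfg = json.loads(daemon_json)
# The device plugin requires nvidia as the default runtime (or per-pod runtimeClass).
assert cfg["default-runtime"] == "nvidia"
assert "nvidia" in cfg["runtimes"]
print("daemon.json looks consistent")
```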
- [ ] The k8s-device-plugin container logs
  (Where are these .log files located?)
- [ ] The kubelet logs on the node (e.g. `sudo journalctl -r -u kubelet`):
```
May 31 10:36:45 fabricnode2 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
May 31 10:36:45 fabricnode2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 31 10:36:45 fabricnode2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 31 10:36:45 fabricnode2 kubelet[26465]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 31 10:36:45 fabricnode2 kubelet[26465]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 31 10:36:45 fabricnode2 kubelet[26465]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 31 10:36:45 fabricnode2 kubelet[26465]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.712811 26465 server.go:417] Version: v1.18.3
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.713065 26465 plugins.go:100] No cloud provider specified.
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.713094 26465 server.go:837] Client rotation is on, will bootstrap in background
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.720273 26465 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.752471 26465 server.go:646] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.752780 26465 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.752790 26465 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.752886 26465 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.752892 26465 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.752896 26465 container_manager_linux.go:306] Creating device plugin manager: true
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.752959 26465 client.go:75] Connecting to docker on unix:///var/run/docker.sock
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.752969 26465 client.go:92] Start docker client with request timeout=2m0s
May 31 10:36:45 fabricnode2 kubelet[26465]: W0531 10:36:45.756511 26465 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.756523 26465 docker_service.go:238] Hairpin mode set to "hairpin-veth"
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.779326 26465 docker_service.go:253] Docker cri networking managed by cni
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.784033 26465 docker_service.go:258] Docker Info: &{ID:WC7E:33ND:PA6M:PDI5:3NBZ:KG5Z:H36Z:PJJH:5KWN:XPOP:DJ6E:3HXS Containers:16 ContainersRunning:6 ContainersPaused:0 ContainersStopped:10 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:60 SystemTime:2020-05-31T10:36:45.780129397-06:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-101-generic OperatingSystem:Ubuntu 18.04 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0002083f0 NCPU:12 MemTotal:8197746688 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:fabricnode2 Labels:[] ExperimentalBuild:false ServerVersion:19.03.6 ClusterStore: ClusterAdvertise: Runtimes:map[nvidia:{Path:/usr/bin/nvidia-container-runtime Args:[]} runc:{Path:runc Args:[]}] DefaultRuntime:nvidia Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support]}
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.784134 26465 docker_service.go:271] Setting cgroupDriver to cgroupfs
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.789164 26465 remote_runtime.go:59] parsed scheme: ""
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.789178 26465 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.789203 26465 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.789209 26465 clientconn.go:933] ClientConn switching balancer to "pick_first"
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.789239 26465 remote_image.go:50] parsed scheme: ""
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.789244 26465 remote_image.go:50] scheme "" not registered, fallback to default scheme
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.789251 26465 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.789255 26465 clientconn.go:933] ClientConn switching balancer to "pick_first"
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.789284 26465 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
May 31 10:36:45 fabricnode2 kubelet[26465]: I0531 10:36:45.789301 26465 kubelet.go:317] Watching apiserver
May 31 10:36:51 fabricnode2 kubelet[26465]: E0531 10:36:51.933532 26465 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
May 31 10:36:51 fabricnode2 kubelet[26465]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
May 31 10:36:51 fabricnode2 kubelet[26465]: I0531 10:36:51.955459 26465 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.6, apiVersion: 1.40.0
May 31 10:36:51 fabricnode2 kubelet[26465]: I0531 10:36:51.955818 26465 server.go:1125] Started kubelet
May 31 10:36:51 fabricnode2 kubelet[26465]: I0531 10:36:51.955860 26465 server.go:145] Starting to listen on 0.0.0.0:10250
May 31 10:36:51 fabricnode2 kubelet[26465]: E0531 10:36:51.955922 26465 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
May 31 10:36:51 fabricnode2 kubelet[26465]: I0531 10:36:51.957246 26465 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
May 31 10:36:51 fabricnode2 kubelet[26465]: I0531 10:36:51.957411 26465 server.go:393] Adding debug handlers to kubelet server.
May 31 10:36:51 fabricnode2 kubelet[26465]: I0531 10:36:51.957547 26465 volume_manager.go:265] Starting Kubelet Volume Manager
May 31 10:36:51 fabricnode2 kubelet[26465]: I0531 10:36:51.957856 26465 desired_state_of_world_populator.go:139] Desired state populator starts to run
May 31 10:36:51 fabricnode2 kubelet[26465]: I0531 10:36:51.966176 26465 clientconn.go:106] parsed scheme: "unix"
May 31 10:36:51 fabricnode2 kubelet[26465]: I0531 10:36:51.966188 26465 clientconn.go:106] scheme "unix" not registered, fallback to default scheme
May 31 10:36:51 fabricnode2 kubelet[26465]: I0531 10:36:51.966239 26465 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }
May 31 10:36:51 fabricnode2 kubelet[26465]: I0531 10:36:51.966245 26465 clientconn.go:933] ClientConn switching balancer to "pick_first"
May 31 10:36:51 fabricnode2 kubelet[26465]: I0531 10:36:51.969305 26465 status_manager.go:158] Starting to sync pod status with apiserver
May 31 10:36:51 fabricnode2 kubelet[26465]: I0531 10:36:51.969325 26465 kubelet.go:1821] Starting kubelet main sync loop.
May 31 10:36:51 fabricnode2 kubelet[26465]: E0531 10:36:51.969357 26465 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
May 31 10:36:51 fabricnode2 kubelet[26465]: W0531 10:36:51.983544 26465 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "nvidia-device-plugin-daemonset-58d5l_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f1f74d88ff302a40e9fdcc9e102f2686158783d4bb08e93ca75026ae6a0c6901"
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.033544 26465 cpu_manager.go:184] [cpumanager] starting with none policy
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.033555 26465 cpu_manager.go:185] [cpumanager] reconciling every 10s
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.033563 26465 state_mem.go:36] [cpumanager] initializing new in-memory state store
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.033652 26465 state_mem.go:88] [cpumanager] updated default cpuset: ""
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.033658 26465 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.033665 26465 policy_none.go:43] [cpumanager] none policy: Start
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.034494 26465 plugin_manager.go:114] Starting Kubelet Plugin Manager
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.035265 26465 manager.go:411] Got registration request from device plugin with resource name "nvidia.com/gpu"
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.035397 26465 endpoint.go:179] parsed scheme: ""
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.035404 26465 endpoint.go:179] scheme "" not registered, fallback to default scheme
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.035416 26465 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/device-plugins/nvidia.sock 0 }] }
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.035421 26465 clientconn.go:933] ClientConn switching balancer to "pick_first"
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.057690 26465 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.067848 26465 kubelet_node_status.go:70] Attempting to register node fabricnode2
May 31 10:36:52 fabricnode2 kubelet[26465]: W0531 10:36:52.069415 26465 pod_container_deletor.go:77] Container "e51239d582656193da0f45c038f01a25924ea5d62a7014b7b2c6ab03ebc68e4f" not found in pod's containers
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.069453 26465 topology_manager.go:233] [topologymanager] Topology Admit Handler
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.078773 26465 topology_manager.go:233] [topologymanager] Topology Admit Handler
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.079378 26465 kubelet_node_status.go:112] Node fabricnode2 was previously registered
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.087446 26465 kubelet_node_status.go:73] Successfully registered node fabricnode2
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.113200 26465 topology_manager.go:233] [topologymanager] Topology Admit Handler
May 31 10:36:52 fabricnode2 kubelet[26465]: W0531 10:36:52.121775 26465 pod_container_deletor.go:77] Container "f1f74d88ff302a40e9fdcc9e102f2686158783d4bb08e93ca75026ae6a0c6901" not found in pod's containers
May 31 10:36:52 fabricnode2 kubelet[26465]: W0531 10:36:52.121800 26465 pod_container_deletor.go:77] Container "7d9b48cd220bc73e8b6662dda0993af466c0583cd92fb44054f2af4dc65e4387" not found in pod's containers
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.159375 26465 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/95594e9a-e0b2-43ea-b965-4e2b00aadd2a-kube-proxy") pod "kube-proxy-fxsxw" (UID: "95594e9a-e0b2-43ea-b965-4e2b00aadd2a")
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.259551 26465 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-64vmx" (UniqueName: "kubernetes.io/secret/95594e9a-e0b2-43ea-b965-4e2b00aadd2a-kube-proxy-token-64vmx") pod "kube-proxy-fxsxw" (UID: "95594e9a-e0b2-43ea-b965-4e2b00aadd2a")
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.259573 26465 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/a8350f54-fe40-4dcb-b54c-4a3e46254170-xtables-lock") pod "calico-node-45ll5" (UID: "a8350f54-fe40-4dcb-b54c-4a3e46254170")
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.259590 26465 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "device-plugin" (UniqueName: "kubernetes.io/host-path/9f37a24d-13e6-46b5-9eff-25a080485eef-device-plugin") pod "nvidia-device-plugin-daemonset-58d5l" (UID: "9f37a24d-13e6-46b5-9eff-25a080485eef")
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.259622 26465 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-node-token-2bwnv" (UniqueName: "kubernetes.io/secret/a8350f54-fe40-4dcb-b54c-4a3e46254170-calico-node-token-2bwnv") pod "calico-node-45ll5" (UID: "a8350f54-fe40-4dcb-b54c-4a3e46254170")
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.259697 26465 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-lib-calico" (UniqueName: "kubernetes.io/host-path/a8350f54-fe40-4dcb-b54c-4a3e46254170-var-lib-calico") pod "calico-node-45ll5" (UID: "a8350f54-fe40-4dcb-b54c-4a3e46254170")
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.259727 26465 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-net-dir" (UniqueName: "kubernetes.io/host-path/a8350f54-fe40-4dcb-b54c-4a3e46254170-cni-net-dir") pod "calico-node-45ll5" (UID: "a8350f54-fe40-4dcb-b54c-4a3e46254170")
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.259751 26465 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "host-local-net-dir" (UniqueName: "kubernetes.io/host-path/a8350f54-fe40-4dcb-b54c-4a3e46254170-host-local-net-dir") pod "calico-node-45ll5" (UID: "a8350f54-fe40-4dcb-b54c-4a3e46254170")
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.259775 26465 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvol-driver-host" (UniqueName: "kubernetes.io/host-path/a8350f54-fe40-4dcb-b54c-4a3e46254170-flexvol-driver-host") pod "calico-node-45ll5" (UID: "a8350f54-fe40-4dcb-b54c-4a3e46254170")
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.259798 26465 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/95594e9a-e0b2-43ea-b965-4e2b00aadd2a-xtables-lock") pod "kube-proxy-fxsxw" (UID: "95594e9a-e0b2-43ea-b965-4e2b00aadd2a")
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.259834 26465 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/95594e9a-e0b2-43ea-b965-4e2b00aadd2a-lib-modules") pod "kube-proxy-fxsxw" (UID: "95594e9a-e0b2-43ea-b965-4e2b00aadd2a")
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.259858 26465 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/a8350f54-fe40-4dcb-b54c-4a3e46254170-lib-modules") pod "calico-node-45ll5" (UID: "a8350f54-fe40-4dcb-b54c-4a3e46254170")
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.259871 26465 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin-dir" (UniqueName: "kubernetes.io/host-path/a8350f54-fe40-4dcb-b54c-4a3e46254170-cni-bin-dir") pod "calico-node-45ll5" (UID: "a8350f54-fe40-4dcb-b54c-4a3e46254170")
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.259920 26465 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-run-calico" (UniqueName: "kubernetes.io/host-path/a8350f54-fe40-4dcb-b54c-4a3e46254170-var-run-calico") pod "calico-node-45ll5" (UID: "a8350f54-fe40-4dcb-b54c-4a3e46254170")
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.259946 26465 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "policysync" (UniqueName: "kubernetes.io/host-path/a8350f54-fe40-4dcb-b54c-4a3e46254170-policysync") pod "calico-node-45ll5" (UID: "a8350f54-fe40-4dcb-b54c-4a3e46254170")
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.259970 26465 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-bthvt" (UniqueName: "kubernetes.io/secret/9f37a24d-13e6-46b5-9eff-25a080485eef-default-token-bthvt") pod "nvidia-device-plugin-daemonset-58d5l" (UID: "9f37a24d-13e6-46b5-9eff-25a080485eef")
May 31 10:36:52 fabricnode2 kubelet[26465]: I0531 10:36:52.259984 26465 reconciler.go:157] Reconciler: start to sync state
```
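In the kubelet log above, the key line is the kubelet accepting the device-plugin registration for `nvidia.com/gpu`. A minimal sketch for checking captured journal output for that line (the sample lines are copied from the log above; the helper name is hypothetical):

```python
# Two representative lines from the journalctl capture above.
SAMPLE = [
    'I0531 10:36:52.034494 26465 plugin_manager.go:114] Starting Kubelet Plugin Manager',
    'I0531 10:36:52.035265 26465 manager.go:411] Got registration request from device plugin with resource name "nvidia.com/gpu"',
]

def gpu_plugin_registered(lines):
    """Return True if the kubelet log shows an nvidia.com/gpu device-plugin registration."""
    needle = 'registration request from device plugin with resource name "nvidia.com/gpu"'
    return any(needle in line for line in lines)

print(gpu_plugin_registered(SAMPLE))  # → True
```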
Additional information that might help better understand your environment and reproduce the bug:
- [ ] Docker version from `docker version`:
```
$ docker version
Client:
 Version:           19.03.6
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        369ce74a3c
 Built:             Fri Feb 28 23:45:43 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.6
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       369ce74a3c
  Built:            Wed Feb 19 01:06:16 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.3.3-0ubuntu1~18.04.2
  GitCommit:
 nvidia:
  Version:          spec: 1.0.1-dev
  GitCommit:
 docker-init:
  Version:          0.18.0
  GitCommit:
```
- [ ] Docker command, image and tag used
- [ ] Kernel version from `uname -a`:

```
Linux fabricnode2 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
```
- [ ] Any relevant kernel output lines from `dmesg`
- [ ] NVIDIA packages version from `dpkg -l '*nvidia*'` or `rpm -qa '*nvidia*'`:

```
$ dpkg -l '*nvidia*'
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-=======================================-========================-========================-====================================================================================
un libgldispatch0-nvidia (no description available)
ii libnvidia-cfg1-440:amd64 440.82-0ubuntu0~0.18.04. amd64 NVIDIA binary OpenGL/GLX configuration library
un libnvidia-cfg1-any (no description available)
un libnvidia-common (no description available)
ii libnvidia-common-440 440.82-0ubuntu0~0.18.04. all Shared files used by the NVIDIA libraries
ii libnvidia-compute-440:amd64 440.82-0ubuntu0~0.18.04. amd64 NVIDIA libcompute package
ii libnvidia-compute-440:i386 440.82-0ubuntu0~0.18.04. i386 NVIDIA libcompute package
ii libnvidia-container-tools 1.1.1-1 amd64 NVIDIA container runtime library (command-line tools)
ii libnvidia-container1:amd64 1.1.1-1 amd64 NVIDIA container runtime library
un libnvidia-decode (no description available)
ii libnvidia-decode-440:amd64 440.82-0ubuntu0~0.18.04. amd64 NVIDIA Video Decoding runtime libraries
ii libnvidia-decode-440:i386 440.82-0ubuntu0~0.18.04. i386 NVIDIA Video Decoding runtime libraries
un libnvidia-encode (no description available)
ii libnvidia-encode-440:amd64 440.82-0ubuntu0~0.18.04. amd64 NVENC Video Encoding runtime library
ii libnvidia-encode-440:i386 440.82-0ubuntu0~0.18.04. i386 NVENC Video Encoding runtime library
un libnvidia-extra (no description available)
ii libnvidia-extra-440:amd64 440.82-0ubuntu0~0.18.04. amd64 Extra libraries for the NVIDIA driver
un libnvidia-fbc1 (no description available)
ii libnvidia-fbc1-440:amd64 440.82-0ubuntu0~0.18.04. amd64 NVIDIA OpenGL-based Framebuffer Capture runtime library
ii libnvidia-fbc1-440:i386 440.82-0ubuntu0~0.18.04. i386 NVIDIA OpenGL-based Framebuffer Capture runtime library
un libnvidia-gl (no description available)
ii libnvidia-gl-440:amd64 440.82-0ubuntu0~0.18.04. amd64 NVIDIA OpenGL/GLX/EGL/GLES GLVND libraries and Vulkan ICD
ii libnvidia-gl-440:i386 440.82-0ubuntu0~0.18.04. i386 NVIDIA OpenGL/GLX/EGL/GLES GLVND libraries and Vulkan ICD
un libnvidia-ifr1 (no description available)
ii libnvidia-ifr1-440:amd64 440.82-0ubuntu0~0.18.04. amd64 NVIDIA OpenGL-based Inband Frame Readback runtime library
ii libnvidia-ifr1-440:i386 440.82-0ubuntu0~0.18.04. i386 NVIDIA OpenGL-based Inband Frame Readback runtime library
un libnvidia-ml1 (no description available)
un nvidia-304 (no description available)
un nvidia-340 (no description available)
un nvidia-384 (no description available)
un nvidia-390 (no description available)
un nvidia-common (no description available)
ii nvidia-compute-utils-440 440.82-0ubuntu0~0.18.04. amd64 NVIDIA compute utilities
ii nvidia-container-runtime 3.2.0-1 amd64 NVIDIA container runtime
un nvidia-container-runtime-hook (no description available)
ii nvidia-container-toolkit 1.1.1-1 amd64 NVIDIA container runtime hook
ii nvidia-dkms-440 440.82-0ubuntu0~0.18.04. amd64 NVIDIA DKMS package
un nvidia-dkms-kernel (no description available)
un nvidia-docker (no description available)
ii nvidia-docker2 2.3.0-1 all nvidia-docker CLI wrapper
ii nvidia-driver-440 440.82-0ubuntu0~0.18.04. amd64 NVIDIA driver metapackage
un nvidia-driver-binary (no description available)
un nvidia-kernel-common (no description available)
ii nvidia-kernel-common-440 440.82-0ubuntu0~0.18.04. amd64 Shared files used with the kernel module
un nvidia-kernel-source (no description available)
ii nvidia-kernel-source-440 440.82-0ubuntu0~0.18.04. amd64 NVIDIA kernel source package
un nvidia-legacy-304xx-vdpau-driver (no description available)
un nvidia-legacy-340xx-vdpau-driver (no description available)
un nvidia-opencl-icd (no description available)
un nvidia-persistenced (no description available)
ii nvidia-prime 0.8.8.2 all Tools to enable NVIDIA's Prime
ii nvidia-settings 440.64-0ubuntu0~0.18.04. amd64 Tool for configuring the NVIDIA graphics driver
un nvidia-settings-binary (no description available)
un nvidia-smi (no description available)
un nvidia-utils (no description available)
ii nvidia-utils-440 440.82-0ubuntu0~0.18.04. amd64 NVIDIA driver support binaries
un nvidia-vdpau-driver (no description available)
ii xserver-xorg-video-nvidia-440 440.82-0ubuntu0~0.18.04. amd64 NVIDIA binary Xorg driver
```
- [ ] NVIDIA container library version from `nvidia-container-cli -V`:

```
$ nvidia-container-cli -V
version: 1.1.1
build date: 2020-05-19T15:15+00:00
build revision: e5d6156aba457559979597c8e3d22c5d8d0622db
build compiler: x86_64-linux-gnu-gcc-7 7.5.0
build platform: x86_64
build flags: -D_GNU_SOURCE -D_FORTIFY_SOURCE=2 -DNDEBUG -std=gnu11 -O2 -g -fdata-sections -ffunction-sections -fstack-protector -fno-strict-aliasing -fvisibility=hidden -Wall -Wextra -Wcast-align -Wpointer-arith -Wmissing-prototypes -Wnonnull -Wwrite-strings -Wlogical-op -Wformat=2 -Wmissing-format-attribute -Winit-self -Wshadow -Wstrict-prototypes -Wunreachable-code -Wconversion -Wsign-conversion -Wno-unknown-warning-option -Wno-format-extra-args -Wno-gnu-alignof-expression -Wl,-zrelro -Wl,-znow -Wl,-zdefs -Wl,--gc-sections
```
- [ ] NVIDIA container library logs (see troubleshooting)