
kata-nvidia-gpu runtime pod returns: failed to create containerd task: failed to create shim task: Failed to Check if grpc server is working: ttrpc: closed: unknown

garygan89 opened this issue 9 months ago · 6 comments

Environment Check

Testing is done on a bare-metal, single-master cluster bootstrapped with kubeadm.

  • Ubuntu 22.04.4 LTS
  • VFIO is set up properly on the host (see the quick checks right below this list)
  • Driver installed
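
For reference, these are the kinds of host-side checks used to confirm the VFIO state (the 10de: filter is NVIDIA's PCI vendor ID; adjust to your device):

$ lspci -nnk -d 10de:             # expect "Kernel driver in use: vfio-pci" for the passthrough GPU
$ ls /dev/vfio/                   # one device node per bound IOMMU group, plus /dev/vfio/vfio
$ dmesg | grep -iE 'dmar|iommu'   # confirms the IOMMU is enabled on the host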

Kubernetes Cluster

$ kubectl get nodes -o wide
NAME             STATUS   ROLES           AGE    VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
fecp-edge-sqa2   Ready    control-plane   169m   v1.30.0   192.168.50.49   <none>        Ubuntu 22.04.5 LTS   6.8.0-52-generic   containerd://1.6.8.2

GPU Operator Helm values.yaml. Notable changes from the defaults (the install command used with this file follows it):

  • ccManager enabled
  • cdi enabled
  • kataManager enabled
  • driver enabled

# Default values for gpu-operator.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

platform:
  openshift: false

nfd:
  enabled: true
  nodefeaturerules: false

psa:
  enabled: false

cdi:
  enabled: true
  default: true

sandboxWorkloads:
  enabled: true
  defaultWorkload: "container"

hostPaths:
  # rootFS represents the path to the root filesystem of the host.
  # This is used by components that need to interact with the host filesystem
  # and as such this must be a chroot-able filesystem.
  # Examples include the MIG Manager and Toolkit Container which may need to
  # stop, start, or restart systemd services
  rootFS: "/"

  # driverInstallDir represents the root at which driver files including libraries,
  # config files, and executables can be found.
  driverInstallDir: "/run/nvidia/driver"

daemonsets:
  labels: {}
  annotations: {}
  priorityClassName: system-node-critical
  tolerations:
  - key: nvidia.com/gpu
    operator: Exists
    effect: NoSchedule
  # configuration for controlling update strategy("OnDelete" or "RollingUpdate") of GPU Operands
  # note that driver Daemonset is always set with OnDelete to avoid unintended disruptions
  updateStrategy: "RollingUpdate"
  # configuration for controlling rolling update of GPU Operands
  rollingUpdate:
    # maximum number of nodes to simultaneously apply pod updates on.
    # can be specified either as number or percentage of nodes. Default 1.
    maxUnavailable: "1"

validator:
  repository: nvcr.io/nvidia/cloud-native
  image: gpu-operator-validator
  # If version is not specified, then default is to use chart.AppVersion
  #version: ""
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env: []
  args: []
  resources: {}
  plugin:
    env:
      - name: WITH_WORKLOAD
        value: "false"

operator:
  repository: nvcr.io/nvidia
  image: gpu-operator
  # If version is not specified, then default is to use chart.AppVersion
  #version: ""
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  priorityClassName: system-node-critical
  runtimeClass: nvidia
  use_ocp_driver_toolkit: false
  # cleanup CRD on chart un-install
  cleanupCRD: false
  # upgrade CRD on chart upgrade, requires --disable-openapi-validation flag
  # to be passed during helm upgrade.
  upgradeCRD: true
  initContainer:
    image: cuda
    repository: nvcr.io/nvidia
    version: 12.6.3-base-ubi9 # 12.8.1-base-ubi9
    imagePullPolicy: IfNotPresent
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Equal"
    value: ""
    effect: "NoSchedule"
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Equal"
    value: ""
    effect: "NoSchedule"
  annotations:
    openshift.io/scc: restricted-readonly
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: "node-role.kubernetes.io/master"
                operator: In
                values: [""]
        - weight: 1
          preference:
            matchExpressions:
              - key: "node-role.kubernetes.io/control-plane"
                operator: In
                values: [""]
  logging:
    # Zap time encoding (one of 'epoch', 'millis', 'nano', 'iso8601', 'rfc3339' or 'rfc3339nano')
    timeEncoding: epoch
    # Zap Level to configure the verbosity of logging. Can be one of 'debug', 'info', 'error', or any integer value > 0 which corresponds to custom debug levels of increasing verbosity
    level: info
    # Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn)
    # Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error)
    develMode: false
  resources:
    limits:
      cpu: 500m
      memory: 350Mi
    requests:
      cpu: 200m
      memory: 100Mi

mig:
  strategy: single

driver:
  enabled: true
  nvidiaDriverCRD:
    enabled: false
    deployDefaultCR: true
    driverType: gpu
    nodeSelector: {}
  kernelModuleType: "auto"

  # NOTE: useOpenKernelModules has been deprecated and made no-op. Please use kernelModuleType instead.
  # useOpenKernelModules: false

  # use pre-compiled packages for NVIDIA driver installation.
  # only supported for as a tech-preview feature on ubuntu22.04 kernels.
  usePrecompiled: false
  repository: nvcr.io/nvidia
  image: driver
  version: "570.86.15"  # "570.124.06"
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  startupProbe:
    initialDelaySeconds: 60
    periodSeconds: 10
    # nvidia-smi can take longer than 30s in some cases
    # ensure enough timeout is set
    timeoutSeconds: 60
    failureThreshold: 120
  rdma:
    enabled: false
    useHostMofed: false
  upgradePolicy:
    # global switch for automatic upgrade feature
    # if set to false all other options are ignored
    autoUpgrade: true
    # how many nodes can be upgraded in parallel
    # 0 means no limit, all nodes will be upgraded in parallel
    maxParallelUpgrades: 1
    # maximum number of nodes with the driver installed, that can be unavailable during
    # the upgrade. Value can be an absolute number (ex: 5) or
    # a percentage of total nodes at the start of upgrade (ex:
    # 10%). Absolute number is calculated from percentage by rounding
    # up. By default, a fixed value of 25% is used.'
    maxUnavailable: 25%
    # options for waiting on pod(job) completions
    waitForCompletion:
      timeoutSeconds: 0
      podSelector: ""
    # options for gpu pod deletion
    gpuPodDeletion:
      force: false
      timeoutSeconds: 300
      deleteEmptyDir: false
    # options for node drain (`kubectl drain`) before the driver reload
    # this is required only if default GPU pod deletions done by the operator
    # are not sufficient to re-install the driver
    drain:
      enable: false
      force: false
      podSelector: ""
      # It's recommended to set a timeout to avoid infinite drain in case non-fatal error keeps happening on retries
      timeoutSeconds: 300
      deleteEmptyDir: false
  manager:
    image: k8s-driver-manager
    repository: nvcr.io/nvidia/cloud-native
    # When choosing a different version of k8s-driver-manager, DO NOT downgrade to a version lower than v0.6.4
    # to ensure k8s-driver-manager stays compatible with gpu-operator starting from v24.3.0
    version: v0.8.0
    imagePullPolicy: IfNotPresent
    env:
      - name: ENABLE_GPU_POD_EVICTION
        value: "true"
      - name: ENABLE_AUTO_DRAIN
        value: "false"
      - name: DRAIN_USE_FORCE
        value: "false"
      - name: DRAIN_POD_SELECTOR_LABEL
        value: ""
      - name: DRAIN_TIMEOUT_SECONDS
        value: "0s"
      - name: DRAIN_DELETE_EMPTYDIR_DATA
        value: "false"
  env: []
  resources: {}
  # Private mirror repository configuration
  repoConfig:
    configMapName: ""
  # custom ssl key/certificate configuration
  certConfig:
    name: ""
  # vGPU licensing configuration
  licensingConfig:
    configMapName: ""
    nlsEnabled: true
  # vGPU topology daemon configuration
  virtualTopology:
    config: ""
  # kernel module configuration for NVIDIA driver
  kernelModuleConfig:
    name: ""

toolkit:
  enabled: true
  repository: nvcr.io/nvidia/k8s
  image: container-toolkit
  version: v1.17.5-ubuntu20.04
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env: []
  resources: {}
  installDir: "/usr/local/nvidia"

devicePlugin:
  enabled: true
  repository: nvcr.io/nvidia
  image: k8s-device-plugin
  version: v0.17.1
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  args: []
  env:
    - name: PASS_DEVICE_SPECS
      value: "true"
    - name: FAIL_ON_INIT_ERROR
      value: "true"
    - name: DEVICE_LIST_STRATEGY
      value: envvar
    - name: DEVICE_ID_STRATEGY
      value: uuid
    - name: NVIDIA_VISIBLE_DEVICES
      value: all
    - name: NVIDIA_DRIVER_CAPABILITIES
      value: all
  resources: {}
  # Plugin configuration
  # Use "name" to either point to an existing ConfigMap or to create a new one with a list of configurations(i.e with create=true).
  # Use "data" to build an integrated ConfigMap from a set of configurations as
  # part of this helm chart. An example of setting "data" might be:
  # config:
  #   name: device-plugin-config
  #   create: true
  #   data:
  #     default: |-
  #       version: v1
  #       flags:
  #         migStrategy: none
  #     mig-single: |-
  #       version: v1
  #       flags:
  #         migStrategy: single
  #     mig-mixed: |-
  #       version: v1
  #       flags:
  #         migStrategy: mixed
  config:
    # Create a ConfigMap (default: false)
    create: false
    # ConfigMap name (either existing or to create a new one with create=true above)
    name: ""
    # Default config name within the ConfigMap
    default: ""
    # Data section for the ConfigMap to create (i.e only applies when create=true)
    data: {}
  # MPS related configuration for the plugin
  mps:
    # MPS root path on the host
    root: "/run/nvidia/mps"

# standalone dcgm hostengine
dcgm:
  # disabled by default to use embedded nv-hostengine by exporter
  enabled: false
  repository: nvcr.io/nvidia/cloud-native
  image: dcgm
  version: 4.1.1-2-ubuntu22.04
  imagePullPolicy: IfNotPresent
  args: []
  env: []
  resources: {}

dcgmExporter:
  enabled: false
  repository: nvcr.io/nvidia/k8s
  image: dcgm-exporter
  version: 4.1.1-4.0.4-ubuntu22.04
  imagePullPolicy: IfNotPresent
  env:
    - name: DCGM_EXPORTER_LISTEN
      value: ":9400"
    - name: DCGM_EXPORTER_KUBERNETES
      value: "true"
    - name: DCGM_EXPORTER_COLLECTORS
      value: "/etc/dcgm-exporter/dcp-metrics-included.csv"
  resources: {}
  serviceMonitor:
    enabled: false
    interval: 15s
    honorLabels: false
    additionalLabels: {}
    relabelings: []
    # - source_labels:
    #     - __meta_kubernetes_pod_node_name
    #   regex: (.*)
    #   target_label: instance
    #   replacement: $1
    #   action: replace
  # DCGM Exporter configuration
  # This block is used to configure DCGM Exporter to emit a customized list of metrics.
  # Use "name" to either point to an existing ConfigMap or to create a new one with a
  # list of configurations (i.e with create=true).
  # When pointing to an existing ConfigMap, the ConfigMap must exist in the same namespace as the release.
  # The metrics are expected to be listed under a key called `dcgm-metrics.csv`.
  # Use "data" to build an integrated ConfigMap from a set of custom metrics as
  # part of the chart. An example of some custom metrics are shown below. Note that
  # the contents of "data" must be in CSV format and be valid DCGM Exporter metric configurations.
  config:
    name: custom-dcgm-exporter-metrics
    create: false
    #data: |-
      # Format
      # If line starts with a '#' it is considered a comment
      # DCGM FIELD, Prometheus metric type, help message

      # Clocks
      # DCGM_FI_DEV_SM_CLOCK,  gauge, SM clock frequency (in MHz).
      # DCGM_FI_DEV_MEM_CLOCK, gauge, Memory clock frequency (in MHz).
gfd:
  enabled: true
  repository: nvcr.io/nvidia
  image: k8s-device-plugin
  version: v0.17.1
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env:
    - name: GFD_SLEEP_INTERVAL
      value: 60s
    - name: GFD_FAIL_ON_INIT_ERROR
      value: "true"
  resources: {}

migManager:
  enabled: true
  repository: nvcr.io/nvidia/cloud-native
  image: k8s-mig-manager
  version: v0.12.0-ubuntu20.04
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env:
    - name: WITH_REBOOT
      value: "false"
  resources: {}
  # MIG configuration
  # Use "name" to either point to an existing ConfigMap or to create a new one with a list of configurations(i.e with create=true).
  # Use "data" to build an integrated ConfigMap from a set of configurations as
  # part of this helm chart. An example of setting "data" might be:
  # config:
  #   name: custom-mig-parted-configs
  #   create: true
  #   data:
  #     config.yaml: |-
  #       version: v1
  #       mig-configs:
  #         all-disabled:
  #           - devices: all
  #             mig-enabled: false
  #         custom-mig:
  #           - devices: [0]
  #             mig-enabled: false
  #           - devices: [1]
  #              mig-enabled: true
  #              mig-devices:
  #                "1g.10gb": 7
  #           - devices: [2]
  #             mig-enabled: true
  #             mig-devices:
  #               "2g.20gb": 2
  #               "3g.40gb": 1
  #           - devices: [3]
  #             mig-enabled: true
  #             mig-devices:
  #               "3g.40gb": 1
  #               "4g.40gb": 1
  config:
    default: "all-disabled"
    # Create a ConfigMap (default: false)
    create: false
    # ConfigMap name (either existing or to create a new one with create=true above)
    name: ""
    # Data section for the ConfigMap to create (i.e only applies when create=true)
    data: {}
  gpuClientsConfig:
    name: ""

nodeStatusExporter:
  enabled: false
  repository: nvcr.io/nvidia/cloud-native
  image: gpu-operator-validator
  # If version is not specified, then default is to use chart.AppVersion
  #version: ""
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  resources: {}

gds:
  enabled: false
  repository: nvcr.io/nvidia/cloud-native
  image: nvidia-fs
  version: "2.20.5"
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env: []
  args: []

gdrcopy:
  enabled: false
  repository: nvcr.io/nvidia/cloud-native
  image: gdrdrv
  version: "v2.4.4"
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env: []
  args: []

vgpuManager:
  enabled: false
  repository: ""
  image: vgpu-manager
  version: ""
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env: []
  resources: {}
  driverManager:
    image: k8s-driver-manager
    repository: nvcr.io/nvidia/cloud-native
    # When choosing a different version of k8s-driver-manager, DO NOT downgrade to a version lower than v0.6.4
    # to ensure k8s-driver-manager stays compatible with gpu-operator starting from v24.3.0
    version: v0.8.0
    imagePullPolicy: IfNotPresent
    env:
      - name: ENABLE_GPU_POD_EVICTION
        value: "false"
      - name: ENABLE_AUTO_DRAIN
        value: "false"

vgpuDeviceManager:
  enabled: true
  repository: nvcr.io/nvidia/cloud-native
  image: vgpu-device-manager
  version: v0.3.0
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env: []
  config:
    name: ""
    default: "default"

vfioManager:
  enabled: true
  repository: nvcr.io/nvidia
  image: cuda
  version: 12.8.1-base-ubi9
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env: []
  resources: {}
  driverManager:
    image: k8s-driver-manager
    repository: nvcr.io/nvidia/cloud-native
    # When choosing a different version of k8s-driver-manager, DO NOT downgrade to a version lower than v0.6.4
    # to ensure k8s-driver-manager stays compatible with gpu-operator starting from v24.3.0
    version: v0.8.0
    imagePullPolicy: IfNotPresent
    env:
      - name: ENABLE_GPU_POD_EVICTION
        value: "false"
      - name: ENABLE_AUTO_DRAIN
        value: "false"

kataManager:
  enabled: true
  config:
    artifactsDir: "/opt/nvidia-gpu-operator/artifacts/runtimeclasses"
    runtimeClasses:
      - name: kata-nvidia-gpu
        nodeSelector: {}
        artifacts:
          url: nvcr.io/nvidia/cloud-native/kata-gpu-artifacts:ubuntu22.04-535.54.03
          pullSecret: ""
      - name: kata-nvidia-gpu-snp
        nodeSelector:
          "nvidia.com/cc.capable": "true"
        artifacts:
          url: nvcr.io/nvidia/cloud-native/kata-gpu-artifacts:ubuntu22.04-535.86.10-snp
          pullSecret: ""
  repository: nvcr.io/nvidia/cloud-native
  image: k8s-kata-manager
  version: v0.2.3
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env: []
  resources: {}

sandboxDevicePlugin:
  enabled: true
  repository: nvcr.io/nvidia
  image: kubevirt-gpu-device-plugin
  version: v1.3.1
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  args: []
  env: []
  resources: {}

ccManager:
  enabled: true
  defaultMode: "off"
  repository: nvcr.io/nvidia/cloud-native
  image: k8s-cc-manager
  version: v0.1.1
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  env:
    - name: CC_CAPABLE_DEVICE_IDS
      value: "0x2339,0x2331,0x2330,0x2324,0x2322,0x233d"
  resources: {}

node-feature-discovery:
  enableNodeFeatureApi: true
  priorityClassName: system-node-critical
  gc:
    enable: true
    replicaCount: 1
    serviceAccount:
      name: node-feature-discovery
      create: false
  worker:
    serviceAccount:
      name: node-feature-discovery
      # disable creation to avoid duplicate serviceaccount creation by master spec below
      create: false
    tolerations:
    - key: "node-role.kubernetes.io/master"
      operator: "Equal"
      value: ""
      effect: "NoSchedule"
    - key: "node-role.kubernetes.io/control-plane"
      operator: "Equal"
      value: ""
      effect: "NoSchedule"
    - key: nvidia.com/gpu
      operator: Exists
      effect: NoSchedule
    config:
      sources:
        pci:
          deviceClassWhitelist:
          - "02"
          - "0200"
          - "0207"
          - "0300"
          - "0302"
          deviceLabelFields:
          - vendor
  master:
    serviceAccount:
      name: node-feature-discovery
      create: true
    config:
      extraLabelNs: ["nvidia.com"]
      # noPublish: false
      # resourceLabels: ["nvidia.com/feature-1","nvidia.com/feature-2"]
      # enableTaints: false
      # labelWhiteList: "nvidia.com/gpu"
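
This values file was then used to install the chart, roughly following the standard GPU Operator install flow (the --generate-name flag matches the gpu-operator-1742180705 release name visible in the pod names below; exact flags may differ):

$ helm repo add nvidia https://helm.ngc.nvidia.com/nvidia && helm repo update
$ helm install --wait --generate-name \
    -n gpu-operator --create-namespace \
    nvidia/gpu-operator -f values.yaml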

So far I've ensured that all pods in the gpu-operator and confidential-containers-system namespaces are running fine. VFIO is also set up properly on the host and is used by the nvidia-vfio-manager pod.

$ kubectl -n gpu-operator get pods 
NAME                                                              READY   STATUS      RESTARTS      AGE
gpu-operator-1742180705-node-feature-discovery-gc-547477cdztclv   1/1     Running     1 (97m ago)   105m
gpu-operator-1742180705-node-feature-discovery-master-b679qnbz7   1/1     Running     1 (97m ago)   105m
gpu-operator-1742180705-node-feature-discovery-worker-hkfkw       1/1     Running     1 (97m ago)   105m
gpu-operator-f66dc846-kb2lq                                       1/1     Running     1 (97m ago)   101m
nvidia-cuda-validator-89wvh                                       0/1     Completed   0             105m
nvidia-kata-manager-vll2q                                         1/1     Running     1 (97m ago)   100m
nvidia-sandbox-device-plugin-daemonset-wwvh8                      1/1     Running     0             96m
nvidia-sandbox-validator-dk599                                    1/1     Running     0             96m
nvidia-vfio-manager-7lppb                                         1/1     Running     0             100m
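
The kata-nvidia-gpu RuntimeClass created by the Kata Manager resolves as well (the shim does launch, as the logs further down show). For completeness, quick ways to inspect it and the deployed artifacts (paths taken from the kataManager config above):

$ kubectl get runtimeclass kata-nvidia-gpu -o yaml
$ ls /opt/nvidia-gpu-operator/artifacts/runtimeclasses/kata-nvidia-gpu/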

I use the same CoCo operator version, v0.7.0, as written in the NVIDIA Kata Manager setup guide. Any version later than v0.7.0, such as the latest v0.12.0, does NOT deploy the Kata binaries with the folder hierarchy of a Kata release, but only a few files whose purpose I'm unsure of.
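
The operator itself was deployed roughly as in the upstream CoCo quickstart, pinned to v0.7.0 (the kustomize path below is the upstream operator repo's release overlay):

$ kubectl apply -k "github.com/confidential-containers/operator/config/release?ref=v0.7.0"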

kubectl -n confidential-containers-system get pods 
NAME                                              READY   STATUS    RESTARTS   AGE
cc-operator-controller-manager-576c9c79bf-9lf9n   2/2     Running   0          14m
cc-operator-daemon-install-dft6l                  1/1     Running   0          13m
cc-operator-pre-install-daemon-js5p6              1/1     Running   0          13m

The binaries below are deployed by the cc-operator-daemon-install pod after the CCRuntime CR is created.
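
The CCRuntime CR itself came from the upstream default sample, again pinned to v0.7.0 (assuming the default ccruntime overlay; a custom CR would be applied the same way):

$ kubectl apply -k "github.com/confidential-containers/operator/config/samples/ccruntime/default?ref=v0.7.0"
$ kubectl get ccruntime      # the cc-operator-daemon-install pod appears once this is reconciled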

$ tree /opt/confidential-containers
/opt/confidential-containers
├── bin
│   ├── cloud-hypervisor
│   ├── containerd
│   ├── containerd-shim-kata-v2
│   ├── kata-collect-data.sh
│   ├── kata-monitor
│   ├── kata-runtime
│   ├── qemu-system-x86_64
│   ├── qemu-system-x86_64-snp-experimental
│   └── qemu-system-x86_64-tdx
├── libexec
│   └── virtiofsd
├── runtime-rs
│   └── bin
│       └── containerd-shim-kata-v2
└── share
    ├── bash-completion
    │   └── completions
    │       └── kata-runtime
    ├── defaults
    │   └── kata-containers
    │       ├── configuration-clh-tdx.toml
    │       ├── configuration-clh.toml
    │       ├── configuration-dragonball.toml
    │       ├── configuration-qemu-nvidia-gpu.toml
    │       ├── configuration-qemu-se.toml
    │       ├── configuration-qemu-sev.toml
    │       ├── configuration-qemu-snp.toml
    │       ├── configuration-qemu-tdx.toml
    │       ├── configuration-qemu.toml
    │       ├── configuration-remote.toml
    │       └── configuration.toml -> configuration-qemu.toml

To rule out an error caused by GPU passthrough, I've commented out every part that uses the GPU, but the result is the same.

apiVersion: v1                                                                 
kind: Pod                                                                      
metadata:                                                                      
  name: cuda-vectoradd-kata                                                    
  annotations:                                                                 
#    cdi.k8s.io/gpu: "nvidia.com/pgpu=0"                                        
    io.katacontainers.config.hypervisor.default_memory: "1000"                 
spec:                                                                          
  runtimeClassName: kata-nvidia-gpu # kata-qemu-nvidia-gpu                     
  restartPolicy: OnFailure                                                     
  containers:                                                                  
  - name: cuda-vectoradd                                                       
    image: "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.7.1-ubuntu20.04"   
#    resources:                                                                
#      limits:                                                                 
#        "nvidia.com/GA107_GEFORCE_RTX_3050_8GB": 1                            
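
Applying this manifest and describing the pod surfaces the sandbox failure (cuda-vectoradd-kata.yaml is simply the name the manifest above was saved under):

$ kubectl apply -f cuda-vectoradd-kata.yaml
$ kubectl describe pod cuda-vectoradd-kata     # the events below come from this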

Kubelet pod events

  Warning  FailedCreatePodSandBox  93s (x25 over 3m33s)  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: Failed to Check if grpc server is working: ttrpc: closed: unknown

Logs from containerd, captured via journalctl -f -u containerd -t kata, after enabling Kata debug mode for the hypervisor, agent, and runtime.
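
A rough sketch of how debug was enabled, assuming the standard commented-out enable_debug keys in the Kata configuration (the file path is the one reported by the shim in the log below):

$ CONF=/opt/nvidia-gpu-operator/artifacts/runtimeclasses/kata-nvidia-gpu/configuration-kata-qemu-nvidia-gpu.toml
$ sudo sed -i 's/^# *\(enable_debug\).*=.*$/\1 = true/g' "$CONF"   # uncomment enable_debug under [hypervisor.qemu], [agent.kata] and [runtime]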

From the logs it's clear that the error starts to occur after the "VM started" line. Normally the kata-agent process inside the guest should have started printing some logs at that point.

三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.982526736+08:00" level=info msg="VM started" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox

See https://pastebin.com/vwJbXmWB for the full log; it exceeds the maximum character limit of this post.

 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.921114492+08:00" level=info msg="loaded configuration" file=/opt/nvidia-gpu-operator/artifacts/runtimeclasses/kata-nvidia-gpu/configuration-kata-qemu-nvidia-gpu.toml format=TOML name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=katautils
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.921211895+08:00" level=info msg="IOMMUPlatform is disabled by default." name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=katautils
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.921229589+08:00" level=debug default-kernel-parameters= name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=katautils
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.921739139+08:00" level=debug msg="container rootfs: /run/containerd/io.containerd.runtime.v2.task/k8s.io/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/rootfs" source=virtcontainers subsystem=oci
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.921768589+08:00" level=info msg="shm-size detected: 67108864" source=virtcontainers subsystem=oci
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.923277557+08:00" level=warning msg="Could not add /dev/mshv to the devices cgroup" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=cgroups
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.924048395+08:00" level=debug msg="restore sandbox failed" error="open /run/vc/sbs/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/persist.json: no such file or directory" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.924166636+08:00" level=debug msg="Creating bridges" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers/hypervisor subsystem=qemu
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.924200992+08:00" level=debug msg="Creating UUID" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers/hypervisor subsystem=qemu
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.924428161+08:00" level=debug msg="Disable nesting environment checks" inside-vm=false name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers/hypervisor subsystem=qemu
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.924617458+08:00" level=info msg="Set selinux=0 to kernel params because SELinux on the guest is disabled" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers/hypervisor subsystem=qemu
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.92646753+08:00" level=info msg="adding volume" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers/hypervisor subsystem=qemu volume-type=virtio-fs
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.926994502+08:00" level=info msg="veth interface found" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=network
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.927028463+08:00" level=info msg="Attaching endpoint" endpoint-type=virtual hotplug=false name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=network
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.927055094+08:00" level=info msg="connect TCFilter to VM network" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=network
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.928831384+08:00" level=info msg="endpoints found after scan" endpoints="[0xc00027c600]" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=network
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.928871809+08:00" level=debug msg="Endpoints added" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=network
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.943483178+08:00" level=info msg="Starting VM" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.943587144+08:00" level=debug default-kernel-parameters="tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k cryptomgr.notests net.ifnames=0 pci=lastbus=0 console=hvc0 console=hvc1 debug" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers/hypervisor subsystem=qemu
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.94367444+08:00" level=info msg="created vm path" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers/hypervisor subsystem=qemu vm path=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.944001601+08:00" level=info name=containerd-shim-v2 path=/opt/confidential-containers/libexec/virtiofsd pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers/hypervisor subsystem=virtiofsd
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.944037223+08:00" level=info args="--syslog --cache=auto --shared-dir=/run/kata-containers/shared/sandboxes/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/shared --fd=3 --thread-pool-size=1 --announce-submounts" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers/hypervisor subsystem=virtiofsd
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.944648942+08:00" level=info msg="Adding extra file [0xc0001222c0 0xc0001221b0 0xc000122290 0xc000122298 0xc000122270 0xc000122280]" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers/hypervisor subsystem=qmp
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.944720475+08:00" level=info msg="launching /opt/confidential-containers/bin/qemu-system-x86_64 with: [-name sandbox-4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 -uuid 605185b9-792d-4ab6-91ce-9c3416216f74 -machine q35,accel=kvm -cpu host,pmu=off -qmp unix:fd=3,server=on,wait=off -monitor unix:path=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/hmp.sock,server=on,wait=off -m 1000M,slots=10,maxmem=32856M -device pci-bridge,bus=pcie.0,id=pci-bridge-0,chassis_nr=1,shpc=off,addr=2,io-reserve=4k,mem-reserve=1m,pref64-reserve=1m -device virtio-serial-pci,disable-modern=false,id=serial0 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock,server=on,wait=off -device virtio-scsi-pci,id=scsi0,disable-modern=false -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng-pci,rng=rng0 -device pcie-root-port,id=rp0,bus=pcie.0,chassis=0,slot=0,multifunction=off,pref64-reserve=17179869184B,mem-reserve=67108864B -device pcie-root-port,id=rp1,bus=pcie.0,chassis=0,slot=1,multifunction=off,pref64-reserve=17179869184B,mem-reserve=67108864B -device vhost-vsock-pci,disable-modern=false,vhostfd=4,id=vsock-1360354645,guest-cid=1360354645 -chardev socket,id=char-a829248e0b6bf1c6,path=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/vhost-fs.sock -device vhost-user-fs-pci,chardev=char-a829248e0b6bf1c6,tag=kataShared,queue-size=1024 -netdev tap,id=network-0,vhost=on,vhostfds=5:6,fds=7:8 -device driver=virtio-net-pci,netdev=network-0,mac=86:9e:99:6a:11:60,disable-modern=false,mq=on,vectors=6 -rtc base=utc,driftfix=slew,clock=host -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic --no-reboot -object memory-backend-file,id=dimm1,size=1000M,mem-path=/dev/shm,share=on -numa node,memdev=dimm1 -kernel /opt/nvidia-gpu-operator/artifacts/runtimeclasses/kata-nvidia-gpu/vmlinuz-5.19.2-109-nvidia-gpu -initrd /opt/nvidia-gpu-operator/artifacts/runtimeclasses/kata-nvidia-gpu/kata-ubuntu-jammy-nvidia-gpu.initrd -append tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k cryptomgr.notests net.ifnames=0 pci=lastbus=0 console=hvc0 console=hvc1 debug panic=1 nr_cpus=20 selinux=0 scsi_mod.scan=none agent.log=debug agent.log=debug initcall_debug -pidfile /run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/pid -smp 2,cores=1,threads=1,sockets=20,maxcpus=20]" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers/hypervisor subsystem=qmp
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.945505207+08:00" level=info msg="Start logging QEMU (qemuPid=1348203)" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers/hypervisor subsystem=qemu
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.958283025+08:00" level=info msg="QMP details" name=containerd-shim-v2 pid=1348190 qmp-Capabilities=oob qmp-major-version=7 qmp-micro-version=0 qmp-minor-version=1 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers/hypervisor subsystem=qemu
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.981949579+08:00" level=info msg="scanner return error: read unix @->/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/qmp.sock: use of closed network connection" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers/hypervisor subsystem=qmp
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.982048825+08:00" level=info msg="hypervisor pid is 1348203" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.982409951+08:00" level=info msg="already added" endpoint=eth0 name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=network
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.982470975+08:00" level=info msg="already added" endpoint=tap0_kata name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=network
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.98248591+08:00" level=info msg="endpoints found after scan" endpoints="[0xc00027c600]" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=network
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.982514015+08:00" level=debug msg="Endpoints added" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=network
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.982526736+08:00" level=info msg="VM started" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.982538817+08:00" level=debug msg="console watcher starts" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.982599664+08:00" level=info msg="New client" name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=kata_agent url="vsock://1360354645:1024"
 三  21 18:08:59 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:08:59.98261795+08:00" level=debug msg="custom dialing timeout has been set" name=agent-client pid=1348190 source=agent-client timeout=45
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.448389953+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.189677] brd: module loaded"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.448501667+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.189806] initcall brd_init+0x0/0x100 returned 0 after 3323 usecs"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.448516887+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.189890] calling  loop_init+0x0/0xe9 @ 1"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.450000864+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.191365] loop: module loaded"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.450077921+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.191467] initcall loop_init+0x0/0xe9 returned 0 after 1524 usecs"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.45013672+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.191542] calling  virtio_blk_init+0x0/0x78 @ 1"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.450306613+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.191614] initcall virtio_blk_init+0x0/0x78 returned 0 after 16 usecs"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.45037667+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.191770] calling  nd_pmem_driver_init+0x0/0x15 @ 1"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.45045867+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.191844] initcall nd_pmem_driver_init+0x0/0x15 returned 0 after 5 usecs"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.450513414+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.191920] calling  nd_btt_init+0x0/0x6 @ 1"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.450584435+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.191976] initcall nd_btt_init+0x0/0x6 returned -6 after 0 usecs"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.450641149+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.192047] calling  virtio_pmem_driver_init+0x0/0xc @ 1"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.450727553+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.192106] initcall virtio_pmem_driver_init+0x0/0xc returned 0 after 2 usecs"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.450841615+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.192189] calling  virtio_scsi_init+0x0/0xc4 @ 1"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.451758307+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.193143] scsi host0: Virtio SCSI HBA"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.452385425+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.193759] initcall virtio_scsi_init+0x0/0xc4 returned 0 after 1513 usecs"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.452432958+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.193843] calling  init_sd+0x0/0x12c @ 1"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.452568603+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.193943] initcall init_sd+0x0/0x12c returned 0 after 60 usecs"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.452616125+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.194021] calling  blackhole_netdev_init+0x0/0x77 @ 1"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.45275463+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.194081] initcall blackhole_netdev_init+0x0/0x77 returned 0 after 4 usecs"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.45278725+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.194172] calling  veth_init+0x0/0xc @ 1"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.452831638+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.194214] initcall veth_init+0x0/0xc returned 0 after 1 usecs"
 三  21 18:09:00 fecp-edge-sqa2 kata[1348190]: time="2025-03-21T18:09:00.452886193+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1/console.sock name=containerd-shim-v2 pid=1348190 sandbox=4bc17593ed4a1e7fada3e1839fdf4679e73f3bcd90121b1da67a889a3020c6b1 source=virtcontainers subsystem=sandbox vmconsole="[    0.194286] calling  virtio_net_driver_init+0x0/0x91 @ 1"
...
...
...
