
Can't run runsc with the KVM platform in VMware

Open · terenceli opened this issue 2 years ago · 3 comments

Description

When I use the latest runsc binary inside a VMware virtual machine and run it with the KVM platform, it fails to start. dmesg shows:

[ 6761.380049] *** Guest State ***
[ 6761.380058] CR0: actual=0x0000000080040031, shadow=0x0000000080040031, gh_mask=fffffffffffffff7
[ 6761.380064] CR4: actual=0x0000000000372670, shadow=0x0000000000370630, gh_mask=ffffffffffffe871
[ 6761.380065] CR3 = 0x000000c10046e000
[ 6761.380066] PDPTR0 = 0x0000000000000000 PDPTR1 = 0x0000000000000000
[ 6761.380066] PDPTR2 = 0x0000000000000000 PDPTR3 = 0x0000000000000000
[ 6761.380067] RSP = 0xffff80c00045e100 RIP = 0x0000000000b58ac0
[ 6761.380070] RFLAGS=0x00000002 DR7 = 0x0000000000000400
[ 6761.380071] Sysenter RSP=0000000000000000 CS:RIP=0000:0000000000000000
[ 6761.380072] CS: sel=0x0010, attr=0x0a099, limit=0xffffffff, base=0x0000000000000000
[ 6761.380073] DS: sel=0x002b, attr=0x080f3, limit=0xffffffff, base=0x0000000000000000
[ 6761.380074] SS: sel=0x0018, attr=0x08093, limit=0xffffffff, base=0x0000000000000000
[ 6761.380075] ES: sel=0x002b, attr=0x080f3, limit=0xffffffff, base=0x0000000000000000
[ 6761.380075] FS: sel=0x002b, attr=0x080f3, limit=0xffffffff, base=0x0000000000000000
[ 6761.380076] GS: sel=0x002b, attr=0x080f3, limit=0xffffffff, base=0x0000000000000000
[ 6761.380077] GDTR: limit=0x00000047, base=0xffff80c00045e120
[ 6761.380078] LDTR: sel=0x0000, attr=0x10000, limit=0x00000000, base=0x0000000000000000
[ 6761.380078] IDTR: limit=0x0000fffe, base=0xffff80c000451000
[ 6761.380079] TR: sel=0x0038, attr=0x0008b, limit=0x00000067, base=0xffff80c00045e220
[ 6761.380082] EFER = 0x0000000000000000 PAT = 0x0007040600070406
[ 6761.380085] DebugCtl = 0x0000000000000000 DebugExceptions = 0x0000000000000000
[ 6761.380088] BndCfgS = 0x0000000000000000
[ 6761.380091] Interruptibility = 00000000 ActivityState = 00000000
[ 6761.380092] *** Host State ***
[ 6761.380097] RIP = 0xffffffffc04e18a0 RSP = 0xffffc90003477c28
[ 6761.380113] CS=0010 SS=0018 DS=0000 ES=0000 FS=0000 GS=0000 TR=0040
[ 6761.380121] FSBase=000000c000088090 GSBase=ffff888139c00000 TRBase=fffffe0000003000
[ 6761.380126] GDTBase=fffffe0000001000 IDTBase=fffffe0000000000
[ 6761.380134] CR0=0000000080050033 CR3=00000000498fe001 CR4=00000000003626f0
[ 6761.380141] Sysenter RSP=fffffe0000003000 CS:RIP=0010:ffffffff81c01720
[ 6761.380147] EFER = 0x0000000000000d01 PAT = 0x0407050600070106
[ 6761.380147] *** Control State ***
[ 6761.380148] PinBased=0000003f CPUBased=b5986dfa SecondaryExec=001034ea
[ 6761.380149] EntryControls=000153ff ExitControls=008befff
[ 6761.380156] ExceptionBitmap=00060042 PFECmask=00000000 PFECmatch=00000000
[ 6761.380163] VMEntry: intr_info=00000000 errcode=00000000 ilen=00000000
[ 6761.380164] VMExit: intr_info=00000000 errcode=00000000 ilen=00000000
[ 6761.380165] reason=80000021 qualification=0000000000000000
[ 6761.380165] IDTVectoring: info=00000000 errcode=00000000
[ 6761.380168] TSC Offset = 0xffffffffffffd544
[ 6761.380174] EPT pointer = 0x000000003fb5205e
[ 6761.380181] PLE Gap=00000080 Window=00001000
[ 6761.380184] Virtual processor ID = 0x0001
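For context on the dump (this decoding is not part of the original report): in VMX, bit 31 of the exit reason marks a failed VM entry and the low bits carry the basic exit reason, so reason=80000021 decodes to basic reason 33, which the Intel SDM lists as "VM-entry failure due to invalid guest state". A minimal Go sketch of that decode, illustrative only and not gVisor code:

package main

import "fmt"

func main() {
	// Exit reason dumped by kvm-intel above.
	const reason uint32 = 0x80000021

	const entryFailure = uint32(1) << 31 // bit 31: VM-entry failure
	basic := reason & 0xffff             // low 16 bits: basic exit reason

	fmt.Printf("VM-entry failure: %v\n", reason&entryFailure != 0) // true
	fmt.Printf("basic exit reason: %d\n", basic)                   // 33
	// Basic exit reason 33 is "VM-entry failure due to invalid guest
	// state" per the Intel SDM. One detail in the dump that looks
	// consistent with that (an observation, not a confirmed diagnosis):
	// guest EFER is 0 while CS (attr=0x0a099) describes a long-mode
	// code segment.
}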

Steps to reproduce

In an Ubuntu 18.04.3 LTS virtual machine running on VMware:

  1. Get the latest runsc from:

https://storage.googleapis.com/gvisor/releases/release/latest/x86_64/runsc

  2. ./runsc spec

  3. Use the following spec:

{ "ociVersion": "1.0.0", "process": { "user": { "uid": 0, "gid": 0 }, "args": [ "sh" ], "env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "TERM=xterm" ], "cwd": "/", "capabilities": { "bounding": [ "CAP_AUDIT_WRITE", "CAP_KILL", "CAP_NET_BIND_SERVICE" ], "effective": [ "CAP_AUDIT_WRITE", "CAP_KILL", "CAP_NET_BIND_SERVICE" ], "inheritable": [ "CAP_AUDIT_WRITE", "CAP_KILL", "CAP_NET_BIND_SERVICE" ], "permitted": [ "CAP_AUDIT_WRITE", "CAP_KILL", "CAP_NET_BIND_SERVICE" ] }, "rlimits": [ { "type": "RLIMIT_NOFILE", "hard": 1024, "soft": 1024 } ] }, "root": { "path": "/", "readonly": true }, "hostname": "runsc", "mounts": [ { "destination": "/proc", "type": "proc", "source": "proc" }, { "destination": "/dev", "type": "tmpfs", "source": "tmpfs" }, { "destination": "/sys", "type": "sysfs", "source": "sysfs", "options": [ "nosuid", "noexec", "nodev", "ro" ] } ], "linux": { "namespaces": [ { "type": "pid" }, { "type": "network" }, { "type": "ipc" }, { "type": "uts" }, { "type": "mount" } ] } }

  4. ./runsc --platform=kvm --debug --debug-log=./log/ run abc
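A sanity check worth running first (not in the original report): nested virtualization must be exposed to the Ubuntu guest, which in VMware is the "Virtualize Intel VT-x/EPT or AMD-V/RVI" VM setting, and /dev/kvm must be usable. A minimal Go probe of the KVM device, assuming golang.org/x/sys/unix is available:

package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// KVM_GET_API_VERSION is _IO(0xAE, 0x00); stable kernels have
// returned 12 for this ioctl since KVM was merged.
const kvmGetAPIVersion = 0xAE00

func main() {
	fd, err := unix.Open("/dev/kvm", unix.O_RDWR|unix.O_CLOEXEC, 0)
	if err != nil {
		fmt.Fprintf(os.Stderr, "open /dev/kvm: %v (is nested virtualization enabled?)\n", err)
		os.Exit(1)
	}
	defer unix.Close(fd)

	ver, err := unix.IoctlRetInt(fd, kvmGetAPIVersion)
	if err != nil {
		fmt.Fprintf(os.Stderr, "KVM_GET_API_VERSION: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("KVM API version: %d\n", ver) // expect 12
}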

runsc version

No response

docker version (if using docker)

No response

uname

Linux ubuntu 5.4.0-107-generic #121~18.04.1-Ubuntu SMP Thu Mar 24 17:21:33 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

kubectl (if using Kubernetes)

No response

repo state (if built from source)

No response

runsc debug logs (if available)

# cat runsc.log.20221216-074725.826849.run.txt
I1216 07:47:25.826887    4269 main.go:216] ***************************
I1216 07:47:25.826924    4269 main.go:217] Args: [./runsc --platform=kvm --debug --debug-log=./log/ run test]
I1216 07:47:25.826938    4269 main.go:218] Version release-20221212.0
I1216 07:47:25.826948    4269 main.go:219] GOOS: linux
I1216 07:47:25.826959    4269 main.go:220] GOARCH: amd64
I1216 07:47:25.826969    4269 main.go:221] PID: 4269
I1216 07:47:25.826981    4269 main.go:222] UID: 0, GID: 0
I1216 07:47:25.826991    4269 main.go:223] Configuration:
I1216 07:47:25.827002    4269 main.go:224] 		RootDir: /var/run/runsc
I1216 07:47:25.827012    4269 main.go:225] 		Platform: kvm
I1216 07:47:25.827022    4269 main.go:226] 		FileAccess: exclusive
I1216 07:47:25.827037    4269 main.go:228] 		Overlay: Root=false, SubMounts=false, FilestoreDir=""
I1216 07:47:25.827048    4269 main.go:229] 		Network: sandbox, logging: false
I1216 07:47:25.827059    4269 main.go:230] 		Strace: false, max size: 1024, syscalls: 
I1216 07:47:25.827070    4269 main.go:231] 		LISAFS: true
I1216 07:47:25.827080    4269 main.go:232] 		Debug: true
I1216 07:47:25.827091    4269 main.go:233] 		Systemd: false
I1216 07:47:25.827104    4269 main.go:234] ***************************
W1216 07:47:25.827641    4269 specutils.go:113] noNewPrivileges ignored. PR_SET_NO_NEW_PRIVS is assumed to always be set.
D1216 07:47:25.827701    4269 specutils.go:75] Spec:
{
  "ociVersion": "1.0.0",
  "process": {
    "user": {
      "uid": 0,
      "gid": 0
    },
    "args": [
      "sh"
    ],
    "env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
      "TERM=xterm"
    ],
    "cwd": "/",
    "rlimits": [
      {
        "type": "RLIMIT_NOFILE",
        "hard": 1024,
        "soft": 1024
      }
    ]
  },
  "root": {
    "path": "/",
    "readonly": true
  },
  "hostname": "runsc",
  "mounts": [
    {
      "destination": "/proc",
      "type": "proc",
      "source": "/home/test/runsc/proc"
    },
    {
      "destination": "/dev",
      "type": "tmpfs",
      "source": "/home/test/runsc/tmpfs"
    },
    {
      "destination": "/sys",
      "type": "sysfs",
      "source": "/home/test/runsc/sysfs",
      "options": [
        "nosuid",
        "noexec",
        "nodev",
        "ro"
      ]
    }
  ],
  "linux": {
    "namespaces": [
      {
        "type": "pid"
      },
      {
        "type": "network"
      },
      {
        "type": "ipc"
      },
      {
        "type": "uts"
      },
      {
        "type": "mount"
      }
    ]
  }
}
D1216 07:47:25.827712    4269 container.go:490] Run container, cid: test, rootDir: "/var/run/runsc"
D1216 07:47:25.827721    4269 container.go:180] Create container, cid: test, rootDir: "/var/run/runsc"
D1216 07:47:25.827779    4269 container.go:238] Creating new sandbox for container, cid: test
D1216 07:47:25.827797    4269 cgroup.go:410] New cgroup for pid: self, *cgroup.cgroupV1: &{Name:/test Parents:map[] Own:map[]}
D1216 07:47:25.827820    4269 cgroup.go:483] Installing cgroup path "/test"
D1216 07:47:25.827831    4269 cgroup.go:501] Using pre-created cgroup "memory": "/sys/fs/cgroup/memory/test"
D1216 07:47:25.827844    4269 cgroup.go:501] Using pre-created cgroup "rdma": "/sys/fs/cgroup/rdma/test"
D1216 07:47:25.827851    4269 cgroup.go:501] Using pre-created cgroup "systemd": "/sys/fs/cgroup/systemd/test"
D1216 07:47:25.827859    4269 cgroup.go:501] Using pre-created cgroup "cpuacct": "/sys/fs/cgroup/cpuacct/test"
D1216 07:47:25.827865    4269 cgroup.go:501] Using pre-created cgroup "freezer": "/sys/fs/cgroup/freezer/test"
D1216 07:47:25.827872    4269 cgroup.go:501] Using pre-created cgroup "perf_event": "/sys/fs/cgroup/perf_event/test"
D1216 07:47:25.827879    4269 cgroup.go:501] Using pre-created cgroup "devices": "/sys/fs/cgroup/devices/test"
D1216 07:47:25.827916    4269 cgroup.go:501] Using pre-created cgroup "blkio": "/sys/fs/cgroup/blkio/test"
D1216 07:47:25.827928    4269 cgroup.go:501] Using pre-created cgroup "cpu": "/sys/fs/cgroup/cpu/test"
D1216 07:47:25.827935    4269 cgroup.go:501] Using pre-created cgroup "hugetlb": "/sys/fs/cgroup/hugetlb/test"
D1216 07:47:25.827950    4269 cgroup.go:501] Using pre-created cgroup "pids": "/sys/fs/cgroup/pids/test"
D1216 07:47:25.827978    4269 cgroup.go:501] Using pre-created cgroup "cpuset": "/sys/fs/cgroup/cpuset/test"
D1216 07:47:25.827993    4269 cgroup.go:501] Using pre-created cgroup "net_cls": "/sys/fs/cgroup/net_cls/test"
D1216 07:47:25.828014    4269 cgroup.go:501] Using pre-created cgroup "net_prio": "/sys/fs/cgroup/net_prio/test"
D1216 07:47:25.828197    4269 cgroup.go:616] Joining cgroup "/sys/fs/cgroup/hugetlb/test"
D1216 07:47:25.828205    4269 cgroup.go:116] Setting "/sys/fs/cgroup/hugetlb/test/cgroup.procs" to "0"
D1216 07:47:25.828265    4269 cgroup.go:616] Joining cgroup "/sys/fs/cgroup/pids/test"
D1216 07:47:25.828274    4269 cgroup.go:116] Setting "/sys/fs/cgroup/pids/test/cgroup.procs" to "0"
D1216 07:47:25.828301    4269 cgroup.go:616] Joining cgroup "/sys/fs/cgroup/devices/test"
D1216 07:47:25.828306    4269 cgroup.go:116] Setting "/sys/fs/cgroup/devices/test/cgroup.procs" to "0"
D1216 07:47:25.828324    4269 cgroup.go:616] Joining cgroup "/sys/fs/cgroup/blkio/test"
D1216 07:47:25.828329    4269 cgroup.go:116] Setting "/sys/fs/cgroup/blkio/test/cgroup.procs" to "0"
D1216 07:47:25.828349    4269 cgroup.go:616] Joining cgroup "/sys/fs/cgroup/cpu/test"
D1216 07:47:25.828354    4269 cgroup.go:116] Setting "/sys/fs/cgroup/cpu/test/cgroup.procs" to "0"
D1216 07:47:25.828380    4269 cgroup.go:616] Joining cgroup "/sys/fs/cgroup/net_prio/test"
D1216 07:47:25.828386    4269 cgroup.go:116] Setting "/sys/fs/cgroup/net_prio/test/cgroup.procs" to "0"
D1216 07:47:25.828409    4269 cgroup.go:616] Joining cgroup "/sys/fs/cgroup/cpuset/test"
D1216 07:47:25.828419    4269 cgroup.go:116] Setting "/sys/fs/cgroup/cpuset/test/cgroup.procs" to "0"
D1216 07:47:25.848351    4269 cgroup.go:616] Joining cgroup "/sys/fs/cgroup/net_cls/test"
D1216 07:47:25.848460    4269 cgroup.go:116] Setting "/sys/fs/cgroup/net_cls/test/cgroup.procs" to "0"
D1216 07:47:25.848520    4269 cgroup.go:616] Joining cgroup "/sys/fs/cgroup/systemd/test"
D1216 07:47:25.848526    4269 cgroup.go:116] Setting "/sys/fs/cgroup/systemd/test/cgroup.procs" to "0"
D1216 07:47:25.848549    4269 cgroup.go:616] Joining cgroup "/sys/fs/cgroup/memory/test"
D1216 07:47:25.848554    4269 cgroup.go:116] Setting "/sys/fs/cgroup/memory/test/cgroup.procs" to "0"
D1216 07:47:25.848576    4269 cgroup.go:616] Joining cgroup "/sys/fs/cgroup/rdma/test"
D1216 07:47:25.848581    4269 cgroup.go:116] Setting "/sys/fs/cgroup/rdma/test/cgroup.procs" to "0"
D1216 07:47:25.848600    4269 cgroup.go:616] Joining cgroup "/sys/fs/cgroup/perf_event/test"
D1216 07:47:25.848605    4269 cgroup.go:116] Setting "/sys/fs/cgroup/perf_event/test/cgroup.procs" to "0"
D1216 07:47:25.848796    4269 cgroup.go:616] Joining cgroup "/sys/fs/cgroup/cpuacct/test"
D1216 07:47:25.848803    4269 cgroup.go:116] Setting "/sys/fs/cgroup/cpuacct/test/cgroup.procs" to "0"
D1216 07:47:25.848821    4269 cgroup.go:616] Joining cgroup "/sys/fs/cgroup/freezer/test"
D1216 07:47:25.848826    4269 cgroup.go:116] Setting "/sys/fs/cgroup/freezer/test/cgroup.procs" to "0"
D1216 07:47:25.848982    4269 donation.go:31] Donating FD 3: "./log/runsc.log.20221216-074725.848846.gofer.txt"
D1216 07:47:25.848992    4269 donation.go:31] Donating FD 4: "/home/test/runsc/config.json"
D1216 07:47:25.848997    4269 donation.go:31] Donating FD 5: "|1"
D1216 07:47:25.849000    4269 donation.go:31] Donating FD 6: "gofer IO FD"
D1216 07:47:25.849003    4269 container.go:1037] Starting gofer: /proc/self/exe [runsc-gofer --root=/var/run/runsc --debug=true --debug-log=./log/ --platform=kvm --debug-log-fd=3 gofer --bundle /home/test/runsc --spec-fd=4 --mounts-fd=5 --io-fds=6]
I1216 07:47:25.852002    4269 container.go:1078] Gofer started, PID: 4278
I1216 07:47:25.852244    4269 sandbox.go:624] Creating sandbox process with addr: runsc-sandbox.test
I1216 07:47:25.852309    4269 sandbox.go:662] Sandbox will be started in new mount, IPC and UTS namespaces
I1216 07:47:25.852321    4269 sandbox.go:674] Sandbox will be started in a new PID namespace
I1216 07:47:25.852339    4269 sandbox.go:683] Sandbox will be started in the container's network namespace: {Type:network Path:}
I1216 07:47:25.852437    4269 sandbox.go:732] Sandbox will be started in new user namespace
D1216 07:47:25.852451    4269 sandbox.go:1377] Changing "/dev/stdin" ownership to 65534/65534
D1216 07:47:25.852463    4269 sandbox.go:1377] Changing "/dev/stdout" ownership to 65534/65534
D1216 07:47:25.852468    4269 sandbox.go:1377] Changing "/dev/stderr" ownership to 65534/65534
D1216 07:47:25.852540    4269 donation.go:31] Donating FD 3: "./log/runsc.log.20221216-074725.852056.boot.txt"
D1216 07:47:25.852547    4269 donation.go:31] Donating FD 4: "sandbox IO FD"
D1216 07:47:25.852551    4269 donation.go:31] Donating FD 5: "|0"
D1216 07:47:25.852557    4269 donation.go:31] Donating FD 6: "|1"
D1216 07:47:25.852562    4269 donation.go:31] Donating FD 7: "control_server_socket"
D1216 07:47:25.852568    4269 donation.go:31] Donating FD 8: "/home/test/runsc/config.json"
D1216 07:47:25.852571    4269 donation.go:31] Donating FD 9: "/dev/kvm"
D1216 07:47:25.852575    4269 donation.go:31] Donating FD 10: "/dev/stdin"
D1216 07:47:25.852580    4269 donation.go:31] Donating FD 11: "/dev/stdout"
D1216 07:47:25.852583    4269 donation.go:31] Donating FD 12: "/dev/stderr"
D1216 07:47:25.852587    4269 sandbox.go:893] Starting sandbox: /proc/self/exe [runsc-sandbox --root=/var/run/runsc --debug=true --debug-log=./log/ --platform=kvm --debug-log-fd=3 boot --bundle=/home/test/runsc --pidns=true --setup-root --cpu-num 4 --total-memory 4090556416 --attached --io-fds=4 --overlay-filestore-fd=-1 --mounts-fd=5 --start-sync-fd=6 --controller-fd=7 --spec-fd=8 --device-fd=9 --stdio-fds=10 --stdio-fds=11 --stdio-fds=12 test]
D1216 07:47:25.852610    4269 sandbox.go:894] SysProcAttr: &{Chroot: Credential:0xc0000c4120 Ptrace:false Setsid:true Setpgid:false Setctty:false Noctty:false Ctty:0 Foreground:false Pgid:0 Pdeathsig:killed Cloneflags:0 Unshareflags:0 UidMappings:[{ContainerID:65534 HostID:65534 Size:1}] GidMappings:[{ContainerID:65534 HostID:65534 Size:1}] GidMappingsEnableSetgroups:false AmbientCaps:[21 18]}
I1216 07:47:25.855227    4269 sandbox.go:917] Sandbox started, PID: 4283
D1216 07:47:26.396547    4269 sandbox.go:1005] Destroy sandbox "test"
D1216 07:47:26.396656    4269 sandbox.go:1008] Killing sandbox "test"
D1216 07:47:26.396770    4269 cgroup.go:602] Restoring cgroup "/sys/fs/cgroup/cpu/user.slice"
D1216 07:47:26.396810    4269 cgroup.go:116] Setting "/sys/fs/cgroup/cpu/user.slice/cgroup.procs" to "0"
D1216 07:47:26.397026    4269 cgroup.go:602] Restoring cgroup "/sys/fs/cgroup/hugetlb"
D1216 07:47:26.397058    4269 cgroup.go:116] Setting "/sys/fs/cgroup/hugetlb/cgroup.procs" to "0"
D1216 07:47:26.397197    4269 cgroup.go:602] Restoring cgroup "/sys/fs/cgroup/blkio/user.slice"
D1216 07:47:26.397226    4269 cgroup.go:116] Setting "/sys/fs/cgroup/blkio/user.slice/cgroup.procs" to "0"
D1216 07:47:26.397406    4269 cgroup.go:602] Restoring cgroup "/sys/fs/cgroup/perf_event"
D1216 07:47:26.397434    4269 cgroup.go:116] Setting "/sys/fs/cgroup/perf_event/cgroup.procs" to "0"
D1216 07:47:26.397823    4269 cgroup.go:602] Restoring cgroup "/sys/fs/cgroup/cpuset"
D1216 07:47:26.397851    4269 cgroup.go:116] Setting "/sys/fs/cgroup/cpuset/cgroup.procs" to "0"
D1216 07:47:26.412425    4269 cgroup.go:602] Restoring cgroup "/sys/fs/cgroup/devices/user.slice"
D1216 07:47:26.412618    4269 cgroup.go:116] Setting "/sys/fs/cgroup/devices/user.slice/cgroup.procs" to "0"
D1216 07:47:26.412812    4269 cgroup.go:602] Restoring cgroup "/sys/fs/cgroup/systemd/user.slice/user-1000.slice/user@1000.service/gnome-terminal-server.service"
D1216 07:47:26.412866    4269 cgroup.go:116] Setting "/sys/fs/cgroup/systemd/user.slice/user-1000.slice/user@1000.service/gnome-terminal-server.service/cgroup.procs" to "0"
D1216 07:47:26.413012    4269 cgroup.go:602] Restoring cgroup "/sys/fs/cgroup/rdma"
D1216 07:47:26.413040    4269 cgroup.go:116] Setting "/sys/fs/cgroup/rdma/cgroup.procs" to "0"
D1216 07:47:26.413102    4269 cgroup.go:602] Restoring cgroup "/sys/fs/cgroup/net_cls"
D1216 07:47:26.413130    4269 cgroup.go:116] Setting "/sys/fs/cgroup/net_cls/cgroup.procs" to "0"
D1216 07:47:26.413245    4269 cgroup.go:602] Restoring cgroup "/sys/fs/cgroup/pids/user.slice/user-1000.slice/user@1000.service"
D1216 07:47:26.413273    4269 cgroup.go:116] Setting "/sys/fs/cgroup/pids/user.slice/user-1000.slice/user@1000.service/cgroup.procs" to "0"
D1216 07:47:26.413370    4269 cgroup.go:602] Restoring cgroup "/sys/fs/cgroup/cpuacct/user.slice"
D1216 07:47:26.413394    4269 cgroup.go:116] Setting "/sys/fs/cgroup/cpuacct/user.slice/cgroup.procs" to "0"
D1216 07:47:26.413443    4269 cgroup.go:602] Restoring cgroup "/sys/fs/cgroup/net_prio"
D1216 07:47:26.413456    4269 cgroup.go:116] Setting "/sys/fs/cgroup/net_prio/cgroup.procs" to "0"
D1216 07:47:26.413532    4269 cgroup.go:602] Restoring cgroup "/sys/fs/cgroup/memory/user.slice"
D1216 07:47:26.413551    4269 cgroup.go:116] Setting "/sys/fs/cgroup/memory/user.slice/cgroup.procs" to "0"
D1216 07:47:26.413650    4269 cgroup.go:602] Restoring cgroup "/sys/fs/cgroup/freezer"
D1216 07:47:26.413670    4269 cgroup.go:116] Setting "/sys/fs/cgroup/freezer/cgroup.procs" to "0"
D1216 07:47:26.413741    4269 container.go:711] Destroy container, cid: test
D1216 07:47:26.413772    4269 container.go:836] Killing gofer for container, cid: test, PID: 4278
W1216 07:47:26.413865    4269 util.go:64] FATAL ERROR: running container: creating container: cannot create sandbox: cannot read client sync file: waiting for sandbox to start: EOF
W1216 07:47:26.414128    4269 main.go:276] Failure to execute command, err: 1

terenceli · Dec 16 '22 15:12

A friendly reminder that this issue had no activity for 120 days.

github-actions[bot] · Sep 13 '23 00:09

While I'm not sure why the KVM platform wouldn't work with nested virtualization in VMware, I'd encourage you to run gVisor with the Systrap platform for this setup. You will likely get better performance, and Systrap does not depend on KVM being available.

EtiennePerot · Sep 13 '23 23:09
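For completeness (not part of the original thread): switching to Systrap only changes the platform flag, so the reproduction command above would become

./runsc --platform=systrap --debug --debug-log=./log/ run abc

Note that the systrap platform only exists in sufficiently recent runsc releases; with older binaries the flag value will be rejected.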

A friendly reminder that this issue had no activity for 120 days.

github-actions[bot] · Jan 12 '24 00:01

This issue has been closed due to lack of activity.

github-actions[bot] · Apr 11 '24 00:04