
Archinstall doesn't seem to detect Intel VMD

Open · MatthewABrantley opened this issue on Oct 20 '21 · 5 comments

I'm not certain whether the installer can be made to check for Intel VMD and include the module in mkinitcpio, but its absence definitely made for a confusing installation experience: I installed Arch onto a partition of my laptop's NVMe drive, rebooted, and the system couldn't find the drive at all, despite Arch having just been installed onto it.

MatthewABrantley · Oct 20 '21 16:10

Heh, I think I've seen this in the wild as well.

Relevant upstream issue: https://bugs.archlinux.org/task/68704
Thread on solving it: https://forum.garudalinux.org/t/trying-to-boot-with-intel-vmd-enabled/4774/3

Excerpt here:

  • Add the module vmd to the MODULES list in /etc/mkinitcpio.conf, then run sudo mkinitcpio -P to regenerate all initramfs images.

  • Edit /etc/default/grub and add nvme_load=YES to the GRUB_CMDLINE_LINUX line, then regenerate the GRUB config with sudo grub-mkconfig -o /boot/grub/grub.cfg.
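As a rough sketch of what automating that first step could look like (a hypothetical helper, not anything archinstall ships today), the installer would only need to splice vmd into the MODULES array and leave the rebuild to mkinitcpio:

import re
from pathlib import Path

def add_vmd_module(conf_path: str = "/etc/mkinitcpio.conf") -> None:
    # Append "vmd" to the MODULES=() array if it isn't already listed.
    conf = Path(conf_path)
    text = conf.read_text()
    match = re.search(r'^MODULES=\((.*?)\)', text, flags=re.MULTILINE)
    if match is None:
        raise ValueError(f"no MODULES=() line found in {conf_path}")
    modules = match.group(1).split()
    if "vmd" not in modules:
        new_line = "MODULES=({})".format(" ".join(modules + ["vmd"]))
        conf.write_text(text[:match.start()] + new_line + text[match.end():])
    # A `mkinitcpio -P` run is still needed afterwards to rebuild the images.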

I think we can absolutely support this. I might have a laptop around with this support, but I'm not 100% sure. I'd need to check what lsblk --json -l -n -o path,size,type,mountpoint,label,pkname,model says the type is on the disk.

That way we can improve it and catch it somewhere along the lines of: https://github.com/archlinux/archinstall/blob/72849083e611486d4a3d141b30c7ad7f2b986cec/archinstall/lib/disk.py#L322-L327 (the above code is being reworked to actually support RAIDs etc.; it's just an example of what and where we should store the information)
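For reference, consuming that lsblk call from Python is straightforward; a minimal sketch (not the actual disk.py code) could look like:

import json
import subprocess

def list_block_devices() -> list[dict]:
    # Same lsblk invocation as above; --json yields a "blockdevices" array.
    output = subprocess.run(
        ["lsblk", "--json", "-l", "-n",
         "-o", "path,size,type,mountpoint,label,pkname,model"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(output)["blockdevices"]

# e.g. disks = [d for d in list_block_devices() if d["type"] == "disk"]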

Torxed · Oct 26 '21 14:10

I can dump mine, no problem, so you don't have to find your machine. VMD is enabled in my BIOS (if it matters for this output).

{
   "blockdevices": [
      {
         "path": "/dev/nvme0n1",
         "size": "476.9G",
         "type": "disk",
         "mountpoint": null,
         "label": null,
         "pkname": null,
         "model": "SAMSUNG MZALQ512HALU-000L2"
      },{
         "path": "/dev/nvme0n1p1",
         "size": "260M",
         "type": "part",
         "mountpoint": "/boot",
         "label": "SYSTEM_DRV",
         "pkname": "nvme0n1",
         "model": null
      },{
         "path": "/dev/nvme0n1p2",
         "size": "16M",
         "type": "part",
         "mountpoint": null,
         "label": null,
         "pkname": "nvme0n1",
         "model": null
      },{
         "path": "/dev/nvme0n1p3",
         "size": "270.6G",
         "type": "part",
         "mountpoint": null,
         "label": null,
         "pkname": "nvme0n1",
         "model": null
      },{
         "path": "/dev/nvme0n1p4",
         "size": "205.1G",
         "type": "part",
         "mountpoint": "/",
         "label": null,
         "pkname": "nvme0n1",
         "model": null
      },{
         "path": "/dev/nvme0n1p5",
         "size": "1000M",
         "type": "part",
         "mountpoint": null,
         "label": "WINRE_DRV",
         "pkname": "nvme0n1",
         "model": null
      }
   ]
}

MatthewABrantley · Oct 26 '21 18:10

Thanks! That helps a lot! Unfortunately, I had hoped the type would show up as something other than disk, since the device behaves slightly differently, just like a RAID would. I guess the kernel doesn't differentiate it enough.

Does lsblk --json -l -n /dev/nvme0n1 -o PKNAME,HCTL,TRAN,SUBSYSTEMS report vmd in the subsystems field? If it doesn't, can you check whether udevadm info -a -n /dev/nvme0n1 | egrep 'looking|DRIVER' does? And if that doesn't either, I'll have to come up with something else, because I'm not sure hwinfo is or will be on the ISO.

I'm trying to find some reliable way to detect what kind of driver or subsystem is used when the tech is there.
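A minimal sketch of the first check, assuming lsblk's lowercase JSON keys and a hypothetical helper name:

import json
import subprocess

def subsystems_of(device: str) -> str:
    # Returns a chain like "block:nvme:pci"; "vmd" would have to appear in it.
    out = subprocess.run(
        ["lsblk", "--json", "-l", "-n", device, "-o", "SUBSYSTEMS"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["blockdevices"][0]["subsystems"]

# "vmd" in subsystems_of("/dev/nvme0n1").split(":")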

Torxed · Oct 26 '21 19:10

The first command just shows this for every device:

"subsystems": "block:nvme:pci"

The second command's output seems a bit more promising, in that it actually emits the letters "vmd":

  looking at device '/devices/pci0000:00/0000:00:0e.0/pci10000:e0/10000:e0:1d.0/10000:e1:00.0/nvme/nvme0/nvme0n1':
    DRIVER==""
  looking at parent device '/devices/pci0000:00/0000:00:0e.0/pci10000:e0/10000:e0:1d.0/10000:e1:00.0/nvme/nvme0':
    DRIVERS==""
  looking at parent device '/devices/pci0000:00/0000:00:0e.0/pci10000:e0/10000:e0:1d.0/10000:e1:00.0':
    DRIVERS=="nvme"
  looking at parent device '/devices/pci0000:00/0000:00:0e.0/pci10000:e0/10000:e0:1d.0':
    DRIVERS=="pcieport"
  looking at parent device '/devices/pci0000:00/0000:00:0e.0/pci10000:e0':
    DRIVERS==""
  looking at parent device '/devices/pci0000:00/0000:00:0e.0':
    DRIVERS=="vmd"
  looking at parent device '/devices/pci0000:00':
    DRIVERS==""

MatthewABrantley · Oct 26 '21 19:10

I can write a parser for that, build up a driver list, and do "vmd" in BlockDevice.drivers :) Thanks, I think I'll go with the udevadm route; I don't see udev going anywhere any time soon unless we're talking BusyBox.
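A rough sketch of that parser (hypothetical, not the final archinstall code) could be as simple as collecting every non-empty DRIVER/DRIVERS value udevadm prints for the device chain:

import re
import subprocess

def device_drivers(device: str) -> list[str]:
    # udevadm walks the device and its parents, printing lines like
    # DRIVERS=="vmd" or DRIVER=="" for each hop.
    out = subprocess.run(
        ["udevadm", "info", "-a", "-n", device],
        capture_output=True, text=True, check=True,
    ).stdout
    return [m for m in re.findall(r'DRIVERS?=="([^"]*)"', out) if m]

# "vmd" in device_drivers("/dev/nvme0n1")  ->  True on the hardware above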

Torxed · Oct 26 '21 19:10