gunyah-hypervisor
Pageable guest memory support
Secure and fast GPU acceleration in VMs (via virtio-GPU native contexts) requires the ability to handle guest-generated page faults. I know of no feasible workaround; I spent years looking for one because Xen has the same limitation, and that will eventually have to be fixed in Xen as well.
Secure demand paging of (HLOS-dependent) VMs is under development and will be released in a future update. Since you mention the GPU, do you instead mean handling of DMA-generated stage-2 MMU page faults?
GPU acceleration via virtio-GPU native contexts requires mapping memory controlled by the Linux GPU drivers into the guest. This memory is accessible by both the GPU and the CPU. Furthermore, the Linux GPU drivers need to be able to revoke guest access to this memory at any time for memory-management reasons, and they need to be able to access and alter the guest memory themselves.
In short, I'm interested in non-secure demand paging :).
Non-secure demand paging should be possible with the Hypervisor API as well; however, this hasn't been tested as far as I'm aware and might need extensions to the host driver and VMMs.
The pages in question may need to be mapped by the guest and host at the same time, by only the guest, by only the host, or by neither. The host needs full control over the guest memory and must be able to handle arbitrary guest stage-2 translation faults. Ideally, Gunyah would forward any guest exception to the host unmodified.
@quic-cvanscha Is this something that should be possible? In particular, can the full KVM API be implemented on top of Gunyah, so that a userspace VMM does not need to know that Gunyah is being used underneath?