Date: Mon, 29 Jul 2019
From: Sean Christopherson <sean.j.christopherson@intel.com>
Subject: Re: [RFC PATCH 04/21] x86/sgx: Add /dev/sgx/virt_epc device to allocate "raw" EPC for VMs

On Sat, Jul 27, 2019 at 10:44:24AM -0700, Andy Lutomirski wrote:
> On Fri, Jul 26, 2019 at 10:52 PM Sean Christopherson
> <sean.j.christopherson@intel.com> wrote:
> >
> > Add an SGX device to enable userspace to allocate EPC without an
> > associated enclave. The intended and only known use case for direct EPC
> > allocation is to expose EPC to a KVM guest, hence the virt_epc moniker,
> > virt.{c,h} files and INTEL_SGX_VIRTUALIZATION Kconfig.
> >
> > Although KVM is the end consumer of EPC, and will need hooks into the
> > virtual EPC management if oversubscription of EPC for guests is ever
> > supported (see below), implement direct access to EPC in the SGX
> > subsystem instead of in KVM. Doing so has two major advantages:
> >
> > - Does not require changes to KVM's uAPI, e.g. EPC gets handled as
> > just another memory backend for guests.
>
> This is general grumbling more than useful feedback, but I wish there
> was a way for KVM's userspace to add a memory region that is *not*
> backed by a memory mapping. For SGX, this would avoid the slightly
> awkward situation where useless EPC pages are mapped by QEMU. For
> SEV, it would solve the really fairly awful situation where the SEV
> pages are mapped *incoherently* for QEMU. And even in the absence of
> fancy hardware features, it would allow the guest to have secrets in
> memory that are not exposed to wild reads, speculation attacks, etc
> coming from QEMU.
>
> I realize the implementation would be extremely intrusive, but it just
> might make it a lot easier to do things like making SEV pages properly
> movable. Similarly, I could see EPC oversubscription being less nasty
> in this model. For one thing, it would make it more straightforward
> to keep track of exactly which VMs have a given EPC page mapped,
> whereas right now this driver only really knows which host userspace
> mm has the EPC page mapped.
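
For reference, the intended flow is that the VMM mmap()s /dev/sgx/virt_epc
and registers the mapping with KVM as an ordinary memory slot; nothing
SGX-specific crosses the KVM uAPI. A rough userspace sketch (made-up slot
number and guest physical address, error handling omitted, and the exact
mmap semantics are of course still up for discussion in this RFC):

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

/*
 * Map "raw" EPC from /dev/sgx/virt_epc and expose it to the guest as a
 * normal KVM memory region.  Slot and GPA are arbitrary for the example.
 */
static void map_virt_epc(int vm_fd, uint64_t gpa, uint64_t size)
{
	int epc_fd = open("/dev/sgx/virt_epc", O_RDWR);
	void *epc = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
			 epc_fd, 0);
	struct kvm_userspace_memory_region region = {
		.slot            = 10,
		.guest_phys_addr = gpa,
		.memory_size     = size,
		.userspace_addr  = (uintptr_t)epc,
	};

	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}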

Host userspace vs VM doesn't add much, if any, complexity to EPC
oversubscription, especially since the problem of supporting multiple mm
structs needs to be solved for the native driver anyways.

The nastiness of oversubscribing a VM is primarily in dealing with
EBLOCK/ETRACK/EWB conflicts between guest and host. The other nasty bit
is that it all but requires fancier VA page management, e.g. allocating a
dedicated VA slot for every EPC page doesn't scale when presenting a
multi-{GB,TB} EPC to a guest, especially since there's no guarantee the
guest will ever access EPC.
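
For a rough sense of scale (using the architectural 512 8-byte version
slots per 4KiB VA page), a dedicated slot for every page of a 1TiB guest
EPC works out to:

  1TiB / 4KiB           = 256M pages, i.e. 256M slots
  256M slots / 512      = 512K VA pages
  512K VA pages * 4KiB  = 2GiB

i.e. 2GiB of host EPC tied up in version slots alone, regardless of
whether the guest ever touches its EPC.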
