    Subject: Re: [RFC KVM 00/27] KVM Address Space Isolation

    Thanks all for your replies and comments. I have tried to summarize the
    main feedback below, and to define next steps.

    But first, let me clarify what should happen when exiting the KVM isolated
    address space (i.e. when we need access to the full kernel). There was
    some confusion because this was not clearly described in the cover letter.
    Thanks to Liran for this better explanation:

    When a hyperthread needs to switch from the KVM isolated address space to
    the kernel full address space, it should first kick all sibling
    hyperthreads out of the guest, and only then safely switch to the full
    kernel address space. Only once all sibling hyperthreads are running with
    the KVM isolated address space is it safe to enter the guest.

    The main point of this address space is to avoid kicking all sibling
    hyperthreads on *every* VM-Exit from the guest, and instead only kick them
    when switching address space. The assumption is that the vast majority of
    exits can be handled in the KVM isolated address space and therefore do
    not require kicking the sibling hyperthreads out of the guest.

    “kick” in this context means sending an IPI to all sibling hyperthreads.
    This IPI will cause these sibling hyperthreads to exit from the guest to
    the host on EXTERNAL_INTERRUPT and wait for a condition that allows them
    to enter back into the guest. This condition will be met once all
    hyperthreads of the CPU core are again running only within the KVM
    isolated address space of this VM.
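
    To make this flow concrete, here is a minimal sketch in kernel-style C.
    All the helpers (kvm_kick_siblings(), kvm_wait_for_siblings(),
    switch_to_kernel_cr3(), switch_to_kvm_cr3()) are hypothetical names used
    for illustration, not existing KVM functions:

        struct kvm_vcpu;

        /* Hypothetical helpers, for illustration only. */
        extern void kvm_kick_siblings(struct kvm_vcpu *vcpu);
        extern void kvm_wait_for_siblings(struct kvm_vcpu *vcpu);
        extern void switch_to_kernel_cr3(void);
        extern void switch_to_kvm_cr3(void);

        /*
         * Leave the KVM isolated address space: kick the sibling
         * hyperthreads out of the guest first, and only then load the
         * full kernel CR3.
         */
        static void kvm_isolation_exit(struct kvm_vcpu *vcpu)
        {
                kvm_kick_siblings(vcpu);  /* IPI -> EXTERNAL_INTERRUPT exit */
                switch_to_kernel_cr3();
        }

        /*
         * Prepare to enter the guest: switch to the KVM page table,
         * then wait until every sibling hyperthread of the core is also
         * running in the KVM isolated address space.
         */
        static void kvm_isolation_enter(struct kvm_vcpu *vcpu)
        {
                switch_to_kvm_cr3();
                kvm_wait_for_siblings(vcpu);
        }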


    Feedback
    ========

    Page-table Management

    - Need to clean up the terminology (mm vs. page-table). It looks like we
    just need a KVM page-table, not a KVM mm.

    - Interfaces for creating and managing the page-table should be provided
    by the kernel, not implemented in KVM. KVM shouldn't access low-level
    kernel memory management functions. (A possible shape for such an
    interface is sketched below.)
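
    As an illustration, such a kernel-provided interface could look roughly
    like the following (all names are hypothetical):

        /*
         * Hypothetical kernel API for building a restricted page table,
         * so that KVM never touches low-level mm internals itself.
         */
        struct isolated_pgtable;

        extern struct isolated_pgtable *isolated_pgtable_create(void);
        extern void isolated_pgtable_free(struct isolated_pgtable *pgt);

        /*
         * Clone the kernel mappings covering [addr, addr + size) into
         * the isolated page table.
         */
        extern int isolated_pgtable_map(struct isolated_pgtable *pgt,
                                        void *addr, size_t size);
        extern void isolated_pgtable_unmap(struct isolated_pgtable *pgt,
                                           void *addr, size_t size);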

    KVM Isolation Enter/Exit

    - Changing CR3 in #PF could be a natural extension, as #PF can already
    change page-tables, but we need a very coherent design and strong
    rules (see the sketch after this list).

    - Reduce to a minimum the amount of kernel code that runs without the
    whole kernel mapping.

    - Avoid using current and task_struct while running with the KVM page
    table.

    - Ensure the KVM page-table is not used with vmalloc.

    - Try to avoid copying parts of the vmalloc page tables. This interacts
    unpleasantly with the kernel stack, which lives in the vmalloc area when
    CONFIG_VMAP_STACK is enabled. We can freely use a different stack (the
    IRQ stack, for example) as long as we don't schedule, but that means we
    can't run preemptible code.

    - Potential issues with tracing, kprobes... A solution would be to
    compile the isolated code with tracing off.

    - Better centralize KVM isolation exit on IRQ, NMI, MCE, faults...
    Switch back to the full kernel before switching to the IRQ stack, or
    shortly after.

    - Can we disable IRQs while running with the KVM page-table?

    For IRQs it's somewhat feasible, but not for NMIs, since NMIs are
    unblocked on VMX immediately after VM-Exit.

    Exits due to INTR, NMI and #MC are considered high priority and are
    serviced before re-enabling IRQs and preemption[1]. All other exits
    are handled after IRQs and preemption are re-enabled.

    A decent number of exit handlers are quite short, but many exit
    handlers require significantly longer flows. In short, leaving
    IRQs disabled across all exits is not practical.

    It makes sense to pinpoint exactly which exits:
    a) are in the hot path for the use case (configuration);
    b) can be handled fast enough that they can run with IRQs disabled.

    Generating that list might allow us to tightly bound the contents
    of kvm_mm and sidestep many of the corner cases, i.e. select VM-Exits
    are handled with IRQs disabled using KVM's mm, while "slow" VM-Exits
    go through the full context switch, as in the sketch below.
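
    A rough sketch of what such a split could look like at exit-handling
    time. The helpers and the particular set of "fast" exit reasons below
    are illustrative assumptions, not a proposal for the final list:

        /* Hypothetical helpers. */
        extern int handle_fast_exit(struct kvm_vcpu *vcpu, u32 exit_reason);
        extern int handle_slow_exit(struct kvm_vcpu *vcpu, u32 exit_reason);

        /*
         * Handle selected hot, short exits with IRQs disabled while
         * still running on the KVM page table; everything else first
         * does the full switch to the kernel address space.
         */
        static int handle_exit(struct kvm_vcpu *vcpu, u32 exit_reason)
        {
                switch (exit_reason) {
                case EXIT_REASON_MSR_WRITE:
                case EXIT_REASON_CPUID:
                case EXIT_REASON_HLT:
                        /* Fast path: IRQs still disabled, KVM CR3. */
                        return handle_fast_exit(vcpu, exit_reason);
                default:
                        /* Slow path: kick siblings, load kernel CR3. */
                        kvm_isolation_exit(vcpu);
                        local_irq_enable();
                        return handle_slow_exit(vcpu, exit_reason);
                }
        }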

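    For the #PF-based exit mentioned earlier in this list, the idea would
    roughly be the following (again with hypothetical helpers, and error
    handling omitted):

        /*
         * Called from the page fault handler: if the fault happened
         * while running on the KVM page table, on an address that is
         * only mapped in the full kernel page table, leave isolation
         * and let the faulting access be retried with the full CR3.
         */
        static bool kvm_isolation_fault(unsigned long address)
        {
                if (!kvm_isolation_active())
                        return false;
                if (address_mapped_in_kvm_pgtable(address))
                        return false;
                kvm_isolation_exit(this_cpu_vcpu());
                return true;    /* fault handled, retry the access */
        }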

    KVM Page Table Content

    - Check and reduce core mappings (kernel text size, cpu_entry_area,
    espfix64, IRQ stack...)

    - Check and reduce the percpu mappings; percpu memory can contain secrets
    (e.g. the percpu random pool). One possible approach is sketched below.
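
    One possible approach for the percpu part, assuming a dedicated percpu
    section is acceptable (DEFINE_PER_CPU_SECTION already exists in the
    kernel; the section name and the mapping step are hypothetical):

        #include <linux/percpu-defs.h>

        /*
         * Group the few percpu variables that KVM actually needs into a
         * dedicated section, and map only the pages of that section
         * into the KVM page table.
         */
        #define DEFINE_PER_CPU_KVM_SHARED(type, name)                   \
                DEFINE_PER_CPU_SECTION(type, name, "..kvm_shared")

        /*
         * At isolation setup, for each CPU, only that section would be
         * mapped, e.g. with the isolated_pgtable_map() interface
         * sketched earlier.
         */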


    Next Steps
    ==========

    I will investigate Sean's suggestion to see which VM-Exits can be handled
    fast enough so that they can run with IRQs disabled (fast VM-Exits),
    and which slow VM-Exits are in the hot path.

    So I will work on a new POC which just handles fast VM-Exits with IRQs
    disabled. This should significantly reduce the mappings required in the
    KVM page table. I will also try to just have a KVM page-table and not a
    KVM mm.

    After this new POC, we should be able to evaluate the need for handling
    slow VM-Exits. And if there's an actual need, we can investigate how
    to handle them with IRQs enabled.


    Thanks,

    alex.
