    Subject: [PATCH v10 16/16] Documentation/x86: Add documentation for TDX host support
    Add documentation for TDX host kernel support.  Documentation/x86/tdx.rst
    already contains documentation for TDX guest internals; reuse it for TDX
    host kernel support as well.

    Introduce a new top-level section "TDX Guest Support", move the existing
    material under it, and add a new top-level section for TDX host kernel
    support.

    Signed-off-by: Kai Huang <kai.huang@intel.com>
    ---
    Documentation/x86/tdx.rst | 186 +++++++++++++++++++++++++++++++++++---
    1 file changed, 175 insertions(+), 11 deletions(-)

    diff --git a/Documentation/x86/tdx.rst b/Documentation/x86/tdx.rst
    index dc8d9fd2c3f7..a6f66a28bef4 100644
    --- a/Documentation/x86/tdx.rst
    +++ b/Documentation/x86/tdx.rst
    @@ -10,6 +10,170 @@ encrypting the guest memory. In TDX, a special module running in a special
    mode sits between the host and the guest and manages the guest/host
    separation.

    +TDX Host Kernel Support
    +=======================
    +
    +TDX introduces a new CPU mode called Secure Arbitration Mode (SEAM) and
    +a new isolated range pointed to by the SEAM Range Register (SEAMRR). A
    +CPU-attested software module called 'the TDX module' runs inside the new
    +isolated range to provide the functionality to manage and run protected
    +VMs.
    +
    +TDX also leverages Intel Multi-Key Total Memory Encryption (MKTME) to
    +provide crypto-protection to the VMs. TDX reserves part of the MKTME
    +KeyIDs as TDX private KeyIDs, which are only accessible within SEAM mode.
    +The BIOS is responsible for partitioning legacy MKTME KeyIDs and TDX
    +private KeyIDs.
    +
    +Before the TDX module can be used to create and run protected VMs, it
    +must be loaded into the isolated range and properly initialized. The TDX
    +architecture doesn't require the BIOS to load the TDX module, but the
    +kernel assumes it is loaded by the BIOS.
    +
    +TDX boot-time detection
    +-----------------------
    +
    +The kernel detects TDX by detecting TDX private KeyIDs during kernel
    +boot. The following dmesg output shows TDX has been enabled by the BIOS::
    +
    + [..] tdx: BIOS enabled: private KeyID range: [16, 64).
    +
    +TDX module detection and initialization
    +---------------------------------------
    +
    +There is no CPUID or MSR to detect the TDX module. The kernel detects it
    +by initializing it.
    +
    +The kernel talks to the TDX module via the new SEAMCALL instruction. The
    +TDX module implements SEAMCALL leaf functions to allow the kernel to
    +initialize it.
    +
    +Initializing the TDX module consumes roughly 1/256th of system RAM to
    +use as 'metadata' for the TDX memory (for example, roughly 256MB on a
    +machine with 64GB of RAM). It also takes additional CPU time to
    +initialize that metadata along with the TDX module itself. Neither is
    +trivial. The kernel therefore initializes the TDX module at runtime, on
    +demand.
    +
    +Besides initializing the TDX module, a per-cpu initialization SEAMCALL
    +must also be done on a cpu before any other SEAMCALLs can be made on
    +that cpu.
    +
    +The kernel provides two functions, tdx_enable() and tdx_cpu_enable(), to
    +allow the user of TDX to enable the TDX module and to enable TDX on the
    +local cpu, respectively.
    +
    +Making a SEAMCALL requires the CPU to already be in VMX operation (VMXON
    +has been done). For now, neither tdx_enable() nor tdx_cpu_enable()
    +handles VMXON internally; both depend on the caller to guarantee that.
    +
    +To enable TDX, the user of TDX should: 1) hold the read lock of the CPU
    +hotplug lock; 2) do VMXON and tdx_cpu_enable() on all online cpus
    +successfully; 3) call tdx_enable(). For example::
    +
 cpus_read_lock();
 on_each_cpu(vmxon_and_tdx_cpu_enable, NULL, 1);
 ret = tdx_enable();
 cpus_read_unlock();
 if (ret)
         goto no_tdx;
 /* TDX is ready to use */
    +
    +The user of TDX must also guarantee tdx_cpu_enable() has been done
    +successfully on a cpu before running any other SEAMCALL on that cpu.
    +A typical usage is to do both VMXON and tdx_cpu_enable() in the CPU
    +hotplug online callback, and to refuse to online the cpu if
    +tdx_cpu_enable() fails, as sketched below.
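    +
    +For illustration only, a minimal sketch of such an online callback might
    +look like the following; tdx_cpu_enable() and cpuhp_setup_state() are
    +real interfaces, while the callback itself and the vmxon_on_this_cpu()
    +helper are hypothetical names standing in for whatever the caller uses::
    +
 /* Sketch: a hypothetical CPU hotplug "online" callback for a TDX user */
 static int tdx_user_online_cpu(unsigned int cpu)
 {
         int ret;

         /* Enter VMX operation (VMXON) on this cpu first */
         ret = vmxon_on_this_cpu();
         if (ret)
                 return ret;

         /* Refuse to online the cpu if TDX cannot be enabled on it */
         return tdx_cpu_enable();
 }

 /* Registered with something like: */
 cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/tdx_user:online",
                   tdx_user_online_cpu, NULL);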
    +
    +The user can consult dmesg to see whether the TDX module is present and
    +whether it has been initialized.
    +
    +If the TDX module is not loaded, dmesg shows the following::
    +
    + [..] tdx: TDX module is not loaded.
    +
    +If the TDX module is initialized successfully, dmesg shows something
    +like the following::
    +
    + [..] tdx: TDX module: attributes 0x0, vendor_id 0x8086, major_version 1, minor_version 0, build_date 20211209, build_num 160
    + [..] tdx: 262668 KBs allocated for PAMT.
    + [..] tdx: TDX module initialized.
    +
    +If the TDX module fails to initialize, dmesg shows the failure::
    +
    + [..] tdx: TDX module initialization failed ...
    +
    +TDX Interaction with Other Kernel Components
    +--------------------------------------------
    +
    +TDX Memory Policy
    +~~~~~~~~~~~~~~~~~
    +
    +TDX reports a list of "Convertible Memory Regions" (CMRs) to tell the
    +kernel which memory is TDX compatible. The kernel needs to build a list
    +of memory regions (out of CMRs) as "TDX-usable" memory and pass those
    +regions to the TDX module. Once this is done, those "TDX-usable" memory
    +regions are fixed for the module's lifetime.
    +
    +To keep things simple, currently the kernel guarantees that all pages in
    +the page allocator are TDX memory. Specifically, the kernel uses all
    +system memory in the core-mm at the time of initializing the TDX module
    +as TDX memory, and at the same time refuses to online any non-TDX memory
    +via memory hotplug.
    +
    +This can be enhanced in the future, e.g. by allowing non-TDX memory to
    +be added to a separate NUMA node. In this case, "TDX-capable" nodes and
    +"non-TDX-capable" nodes can co-exist, but the kernel/userspace needs to
    +guarantee that memory pages for TDX guests are always allocated from the
    +"TDX-capable" nodes, for instance as sketched below.
    +
    +Physical Memory Hotplug
    +~~~~~~~~~~~~~~~~~~~~~~~
    +
    +Note TDX assumes convertible memory is always physically present during
    +the machine's runtime. A non-buggy BIOS should never support hot-removal
    +any convertible memory. This implementation doesn't handle ACPI memory
    +removal but depends on the BIOS to behave correctly.
    +
    +CPU Hotplug
    +~~~~~~~~~~~
    +
    +The TDX module requires the per-cpu initialization SEAMCALL
    +(TDH.SYS.LP.INIT) to be done on a cpu before any other SEAMCALLs can be
    +made on that cpu, including those involved in the module initialization.
    +
    +The kernel provides tdx_cpu_enable() to let the user of TDX do this when
    +it wants to use a new cpu for a TDX task.
    +
    +TDX doesn't support physical (ACPI) CPU hotplug. During machine boot,
    +TDX verifies that all boot-time present logical CPUs are TDX compatible
    +before enabling TDX. A non-buggy BIOS should never support hot-add or
    +removal of physical CPUs. Currently the kernel doesn't handle physical
    +CPU hotplug, but depends on the BIOS to behave correctly.
    +
    +Note TDX works with logical CPU online/offline, thus the kernel still
    +allows a logical CPU to be offlined and onlined again.
    +
    +Kexec()
    +~~~~~~~
    +
    +There are two problems with using kexec() to boot to a new kernel when
    +the old kernel has enabled TDX: 1) part of the memory pages are still
    +TDX private pages; 2) there might be dirty cachelines associated with
    +TDX private pages.
    +
    +The first problem doesn't matter, since KeyID 0 doesn't have integrity
    +checking. Even if the new kernel wants to use a non-zero KeyID, it needs
    +to convert the memory to that KeyID first, and such conversion works
    +from any KeyID.
    +
    +However, the old kernel needs to guarantee there are no dirty cachelines
    +left behind before booting to the new kernel, to avoid silent corruption
    +from later cacheline writeback (Intel hardware doesn't guarantee cache
    +coherency across different KeyIDs).
    +
    +Similar to AMD SME, the kernel simply uses wbinvd() to flush the cache
    +before booting to the new kernel.
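    +
    +For illustration only, such a flush, done on each cpu somewhere on the
    +kexec shutdown path, could look like the following; native_wbinvd() and
    +on_each_cpu() are existing kernel interfaces, while the function name
    +and the exact hook point are hypothetical and not shown::
    +
 /* Sketch: flush caches before jumping to the new kernel, as for AMD SME */
 static void flush_cache_for_kexec(void *unused)
 {
         native_wbinvd();
 }

 /* Run on every online cpu before the kexec jump */
 on_each_cpu(flush_cache_for_kexec, NULL, 1);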
    +
    +TDX Guest Support
    +=================
    Since the host cannot directly access guest registers or memory, much
    normal functionality of a hypervisor must be moved into the guest. This is
    implemented using a Virtualization Exception (#VE) that is handled by the
    @@ -20,7 +184,7 @@ TDX includes new hypercall-like mechanisms for communicating from the
    guest to the hypervisor or the TDX module.

    New TDX Exceptions
    -==================
    +------------------

    TDX guests behave differently from bare-metal and traditional VMX guests.
    In TDX guests, otherwise normal instructions or memory accesses can cause
    @@ -30,7 +194,7 @@ Instructions marked with an '*' conditionally cause exceptions. The
    details for these instructions are discussed below.

    Instruction-based #VE
    ----------------------
    +~~~~~~~~~~~~~~~~~~~~~

    - Port I/O (INS, OUTS, IN, OUT)
    - HLT
    @@ -41,7 +205,7 @@ Instruction-based #VE
    - CPUID*

    Instruction-based #GP
    ----------------------
    +~~~~~~~~~~~~~~~~~~~~~

    - All VMX instructions: INVEPT, INVVPID, VMCLEAR, VMFUNC, VMLAUNCH,
    VMPTRLD, VMPTRST, VMREAD, VMRESUME, VMWRITE, VMXOFF, VMXON
    @@ -52,7 +216,7 @@ Instruction-based #GP
    - RDMSR*,WRMSR*

    RDMSR/WRMSR Behavior
    ---------------------
    +~~~~~~~~~~~~~~~~~~~~

    MSR access behavior falls into three categories:

    @@ -73,7 +237,7 @@ trapping and handling in the TDX module. Other than possibly being slow,
    these MSRs appear to function just as they would on bare metal.

    CPUID Behavior
    ---------------
    +~~~~~~~~~~~~~~

    For some CPUID leaves and sub-leaves, the virtualized bit fields of CPUID
    return values (in guest EAX/EBX/ECX/EDX) are configurable by the
    @@ -93,7 +257,7 @@ not know how to handle. The guest kernel may ask the hypervisor for the
    value with a hypercall.

    #VE on Memory Accesses
    -======================
    +----------------------

    There are essentially two classes of TDX memory: private and shared.
    Private memory receives full TDX protections. Its content is protected
    @@ -107,7 +271,7 @@ entries. This helps ensure that a guest does not place sensitive
    information in shared memory, exposing it to the untrusted hypervisor.

    #VE on Shared Memory
    ---------------------
    +~~~~~~~~~~~~~~~~~~~~

    Access to shared mappings can cause a #VE. The hypervisor ultimately
    controls whether a shared memory access causes a #VE, so the guest must be
    @@ -127,7 +291,7 @@ be careful not to access device MMIO regions unless it is also prepared to
    handle a #VE.

    #VE on Private Pages
    ---------------------
    +~~~~~~~~~~~~~~~~~~~~

    An access to private mappings can also cause a #VE. Since all kernel
    memory is also private memory, the kernel might theoretically need to
    @@ -145,7 +309,7 @@ The hypervisor is permitted to unilaterally move accepted pages to a
    to handle the exception.

    Linux #VE handler
    -=================
    +-----------------

    Just like page faults or #GP's, #VE exceptions can be either handled or be
    fatal. Typically, an unhandled userspace #VE results in a SIGSEGV.
    @@ -167,7 +331,7 @@ While the block is in place, any #VE is elevated to a double fault (#DF)
    which is not recoverable.

    MMIO handling
    -=============
    +-------------

    In non-TDX VMs, MMIO is usually implemented by giving a guest access to a
    mapping which will cause a VMEXIT on access, and then the hypervisor
    @@ -189,7 +353,7 @@ MMIO access via other means (like structure overlays) may result in an
    oops.

    Shared Memory Conversions
    -=========================
    +-------------------------

    All TDX guest memory starts out as private at boot. This memory can not
    be accessed by the hypervisor. However, some kernel users like device
    --
    2.39.2