 
    Subject: Re: [PATCH v8 07/16] x86/virt/tdx: Use all system memory when initializing TDX module as TDX memory
    From:    Dave Hansen
    Date:    2023-01-10
    On 1/10/23 04:09, Huang, Kai wrote:
    > On Mon, 2023-01-09 at 08:51 -0800, Dave Hansen wrote:
    >> On 1/9/23 03:48, Huang, Kai wrote:
    >>>>>>> This can also be enhanced in the future, i.e. by allowing adding non-TDX
    >>>>>>> memory to a separate NUMA node. In this case, the "TDX-capable" nodes
    >>>>>>> and the "non-TDX-capable" nodes can co-exist, but the kernel/userspace
    >>>>>>> needs to guarantee memory pages for TDX guests are always allocated from
    >>>>>>> the "TDX-capable" nodes.
    >>>>>
    >>>>> Why does it need to be enhanced? What's the problem?
    >>>
    >>> The problem is that after TDX module initialization, no more memory can be
    >>> hot-added to the page allocator.
    >>>
    >>> Kirill suggested this may not be ideal. With the existing NUMA ABIs we can
    >>> actually have both TDX-capable and non-TDX-capable NUMA nodes online. We can
    >>> bind TDX workloads to TDX-capable nodes while other non-TDX workloads can
    >>> utilize all memory.
    >>>
    >>> But probably it is not necessary to call out in the changelog?
    >>
    >> Let's say that we add this TDX-compatible-node ABI in the future. What
    >> will old code do that doesn't know about this ABI?
    >
    > Right. Old apps will break without knowing the new ABI. One resolution, I
    > think, is that we don't introduce a new userspace ABI, but hide the "TDX-capable"
    > and "non-TDX-capable" nodes in the kernel, and let the kernel enforce always
    > allocating TDX guest memory from the "TDX-capable" nodes.
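
    For the sake of argument, suppose the kernel did exactly that. The
    allocation path would need to do something like this (purely an
    illustrative sketch; "tdx_capable_nodes" and the helper below are made
    up for this discussion, not existing kernel code):

        #include <linux/gfp.h>
        #include <linux/nodemask.h>

        /* Hypothetical: mask of nodes backed entirely by TDX-capable memory */
        static nodemask_t tdx_capable_nodes;

        /* Hypothetical helper, for illustration only */
        static struct page *tdx_alloc_guest_page(gfp_t gfp, nodemask_t *policy)
        {
                nodemask_t allowed;

                /* Intersect the task's policy nodes with the TDX-capable nodes */
                nodes_and(allowed, *policy, tdx_capable_nodes);

                /* A --membind to a non-TDX node leaves nothing to allocate from */
                if (nodes_empty(allowed))
                        return NULL;

                return __alloc_pages(gfp, 0, first_node(allowed), &allowed);
        }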

    That doesn't actually hide all of the behavior from users. Let's say
    they do:

    numactl --membind=6 qemu-kvm ...

    In other words, take all of this guest's memory and put it on node 6.
    There is lots of free memory on node 6, which is TDX-*IN*compatible. Then,
    they make it a TDX guest:

    numactl --membind=6 qemu-kvm -tdx ...

    What happens? Does the kernel silently ignore the --membind=6? Or does
    it return -ENOMEM somewhere and confuse the user, who has *LOTS* of free
    memory on node 6?

    In other words, I don't think the kernel can just enforce this
    internally and hide it from userspace.
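
    For reference, --membind isn't magic; numactl just installs a strict
    MPOL_BIND memory policy before exec'ing qemu. A minimal sketch of the
    syscall usage (error handling trimmed; node 6 mirrors the example
    above, and the set_mempolicy() wrapper needs -lnuma):

        #include <numaif.h>             /* set_mempolicy(), MPOL_BIND */
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
                /* Bind every future allocation strictly to node 6 */
                unsigned long nodemask = 1UL << 6;

                set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8);

                /*
                 * Every page faulted in below must come from node 6. If
                 * the kernel quietly refused to use node 6's (TDX-incapable)
                 * memory for a TDX guest, this is where the surprising
                 * -ENOMEM or OOM kill would show up.
                 */
                char *buf = malloc(1 << 20);
                if (buf)
                        memset(buf, 0, 1 << 20);
                free(buf);
                return 0;
        }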

    >> Is there something fundamental that keeps a memory area that spans two
    >> nodes from being removed and then a new area added that is comprised of
    >> a single node?
    >> Boot time:
    >>
    >> | memblock | memblock |
    >> <--Node=0--> <--Node=1-->
    >>
    >> Funky hotplug... nothing to see here, then:
    >>
    >> <--------Node=2-------->
    >
    > I must have missed something, but how can this happen?
    >
    > My memory was that this cannot happen, because the BIOS always allocates
    > address ranges for all NUMA nodes during machine boot. Those address ranges
    > don't necessarily need to be fully populated with DIMMs, but they don't
    > change during the machine's runtime.

    Is your memory correct? Is there evidence, or are there requirements in
    any specification, to support your memory?
