Subject: Re: [RFCv1 7/7] KVM: unmap guest memory using poisoned pages
On 07.04.21 15:16, Kirill A. Shutemov wrote:
> On Tue, Apr 06, 2021 at 04:57:46PM +0200, David Hildenbrand wrote:
>> On 06.04.21 16:33, Dave Hansen wrote:
>>> On 4/6/21 12:44 AM, David Hildenbrand wrote:
>>>> On 02.04.21 17:26, Kirill A. Shutemov wrote:
>>>>> TDX architecture aims to provide resiliency against confidentiality and
>>>>> integrity attacks. Towards this goal, the TDX architecture helps enforce
>>>>> the enabling of memory integrity for all TD-private memory.
>>>>>
>>>>> The CPU memory controller computes the integrity check value (MAC) for
>>>>> the data (cache line) during writes, and it stores the MAC with the
>>>>> memory as meta-data. A 28-bit MAC is stored in the ECC bits.
>>>>>
>>>>> Checking of memory integrity is performed during memory reads. If the
>>>>> integrity check fails, the CPU poisons the cache line.
>>>>>
>>>>> On a subsequent consumption (read) of the poisoned data by software,
>>>>> there are two possible scenarios:
>>>>>
>>>>>   - Core determines that the execution can continue and it treats
>>>>>     poison with exception semantics signaled as a #MCE
>>>>>
>>>>>   - Core determines execution cannot continue, and it does an unbreakable
>>>>>     shutdown
>>>>>
>>>>> For more details, see Chapter 14 of Intel TDX Module EAS[1]
>>>>>
>>>>> As some integrity check failures may lead to a system shutdown, the host
>>>>> kernel must not allow any writes to TD-private memory. This requirement
>>>>> clashes with KVM design: KVM expects the guest memory to be mapped into
>>>>> host userspace (e.g. QEMU).
>>>>
>>>> So what you are saying is that if QEMU would write to such memory, it
>>>> could crash the kernel? What a broken design.
>>>
>>> IMNHO, the broken design is mapping the memory to userspace in the first
>>> place. Why the heck would you actually expose something with the MMU to
>>> a context that can't possibly meaningfully access or safely write to it?
>>
>> I'd say the broken design is being able to crash the machine via a simple
>> memory write, instead of only crashing a single process in case you're doing
>> something nasty. From the evaluation of the problem it feels like this was a
>> CPU design workaround: instead of properly cleaning up when it gets tricky
>> within the core, just crash the machine. And that's a CPU "feature", not a
>> kernel "feature". Now we have to fix broken HW in the kernel - once again.
>>
>> However, you raise a valid point: it does not make too much sense to map
>> this into user space. Not arguing against that; but crashing the machine is
>> just plain ugly.
>>
>> I wonder: why do we even *want* a VMA/mmap describing that memory? Sounds
>> like: for hacking support for that memory type into QEMU/KVM.
>>
>> This all feels wrong, but I cannot really tell how it could be better. That
>> memory can really only be used (right now?) with hardware virtualization
>> from some point on. From that point on (right from the start?), there should
>> be no VMA/mmap/page tables for user space anymore.
>>
>> Or am I missing something? Is there still valid user space access?
>
> There is. For IO (e.g. virtio) the guest marks a range of memory as shared
> (or unencrypted for AMD SEV). The range is not pre-defined.
>

Ah right, rings a bell. One obvious alternative would be to let user
space only explicitly map what is shared and can be safely accessed,
instead of doing it the other way around. But that obviously requires
more thought/work and clashes with future MM changes you discuss below.
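
(Purely for illustration: a guest-side sketch of that shared conversion,
assuming a Linux guest and the generic set_memory_decrypted() helper; real
drivers normally go through the DMA API / SWIOTLB rather than open-coding
this.)

/* Guest-side sketch: make a buffer host-accessible (shared) before I/O. */
#include <linux/set_memory.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static void *alloc_shared_buffer(unsigned int order)
{
	struct page *page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);

	if (!page)
		return NULL;

	/* Clear the encryption attribute so the host can read/write it. */
	if (set_memory_decrypted((unsigned long)page_address(page), 1 << order)) {
		__free_pages(page, order);
		return NULL;
	}

	return page_address(page);
}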

>>> This started with SEV. QEMU creates normal memory mappings with the SEV
>>> C-bit (encryption) disabled. The kernel plumbs those into NPT, but when
>>> those are instantiated, they have the C-bit set. So, we have mismatched
>>> mappings. Where does that lead? The two mappings not only differ in
>>> the encryption bit, causing one side to read gibberish if the other
>>> writes: they're not even cache coherent.
>>>
>>> That's the situation *TODAY*, even ignoring TDX.
>>>
>>> BTW, I'm pretty sure I know the answer to the "why would you expose this
>>> to userspace" question: it's what QEMU/KVM did alreadhy for
>>> non-encrypted memory, so this was the quickest way to get SEV working.
>>>
>>
>> Yes, I guess so. It was the fastest way to "hack" it into QEMU.
>>
>> Would we ever even want a VMA/mmap/process page tables for that memory? How
>> could user space ever do something *not so nasty* with that memory (in the
>> current context of VMs)?
>
> In the future, the memory should still be manageable by host MM: migration,
> swapping, etc. But it's a long way off. For now, the guest memory

I was involved in the s390x implementation where this already works,
simply because whenever encrypted memory is read/written from the
hypervisor, you simply read/write the encrypted data; once the VM
accesses that memory, it reads/writes unencrypted memory. For this
reason, migration, swapping, etc. work fairly naturally.

I do wonder how x86-64 wants to tackle that; in the far future, will it
again be valid to read/write encrypted memory, especially from user space?

> is effectively pinned on the host.

Right, I remember that limitation for SEV.
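
(For reference, a rough sketch of how that pinning is triggered from user
space today, assuming a VM fd is already open: QEMU registers each guest RAM
region with KVM_MEMORY_ENCRYPT_REG_REGION, and KVM pins the backing pages
until the region is unregistered.)

/* Userspace sketch: register a guest RAM region for SEV; KVM pins the pages. */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int sev_register_ram(int vm_fd, void *hva, uint64_t size)
{
	struct kvm_enc_region region = {
		.addr = (uintptr_t)hva,
		.size = size,
	};

	/* Pages stay pinned until KVM_MEMORY_ENCRYPT_UNREG_REGION. */
	return ioctl(vm_fd, KVM_MEMORY_ENCRYPT_REG_REGION, &region);
}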

Thanks!

--
Thanks,

David / dhildenb
