Date: 2022-08-03
Subject: Re: [PATCH v1.1 1/2] x86/sev: Use per-CPU PSC structure in prep for unaccepted memory support
From: Tom Lendacky
On 8/3/22 16:48, Dave Hansen wrote:
> On 8/3/22 14:34, Tom Lendacky wrote:
>>> Also, private<->shared page conversions are *NOT* common from what I can
>>> tell.  There are a few pages converted at boot, but most of the
>>> guest<->host communications are through the swiotlb pages, which are
>>> static.
>>
>> Generally, that's true. But, e.g., a dma_alloc_coherent() actually
>> doesn't go through SWIOTLB, but instead allocates the pages and makes
>> them shared, which results in a page state change. The NVMe driver was
>> calling that API a lot. In this case, though, the NVMe driver was
>> running in IRQ context and set_memory_decrypted() could sleep, so an
>> unencrypted DMA memory pool was created to work around the sleeping
>> issue and reduce the page state changes. It's just things like that
>> that make me wary.
>
> Interesting. Is that a real passthrough NVMe device or the hypervisor
> presenting a virtual one that just happens to use the NVMe driver?

Hmmm... not sure, possibly the latter. I just knew that whatever it was,
the NVMe driver was being used.
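
For reference, the private<->shared conversion on that
dma_alloc_coherent() path boils down to roughly the following. This is
only a sketch of what the direct-mapping allocator does, to show where
the page state change comes in; the function name is made up and the
real code (dma_direct_alloc()) handles many more cases:

/*
 * Rough sketch (not the exact upstream code) of why a plain
 * dma_alloc_coherent() triggers a page state change in an SEV-SNP
 * guest: the direct-mapping allocator grabs pages and then flips
 * them shared via set_memory_decrypted(), which can sleep -- hence
 * the unencrypted pool workaround for callers in IRQ context.
 */
#include <linux/dma-map-ops.h>
#include <linux/dma-direct.h>
#include <linux/set_memory.h>
#include <linux/gfp.h>
#include <linux/pfn.h>

static void *sketch_dma_direct_alloc(struct device *dev, size_t size,
				     dma_addr_t *dma_handle, gfp_t gfp)
{
	unsigned int order = get_order(size);
	struct page *page;
	void *vaddr;

	page = alloc_pages(gfp, order);
	if (!page)
		return NULL;
	vaddr = page_address(page);

	if (force_dma_unencrypted(dev)) {
		/*
		 * Private -> shared conversion: on SEV-SNP this goes
		 * through the GHCB Page State Change protocol (plus
		 * PVALIDATE), and set_memory_decrypted() may sleep.
		 */
		if (set_memory_decrypted((unsigned long)vaddr,
					 PFN_UP(size))) {
			__free_pages(page, order);
			return NULL;
		}
	}

	*dma_handle = phys_to_dma(dev, page_to_phys(page));
	return vaddr;
}

And the free side has to convert the pages back with
set_memory_encrypted() before returning them to the page allocator, so
every allocate/free cycle costs two conversions.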

Thanks,
Tom

>
> I'm pretty sure the TDX folks have been banking on having very few page
> state changes. But part of that, at least, is their expectation of
> relying heavily on virtio.
>
> I wonder if their expectations are accurate, or if their hopes will be
> dashed once TDX gets out into the real world.
