Subject: Re: [RFC 00/16] KVM protected memory extension
On 5/26/20 4:38 AM, Mike Rapoport wrote:
> On Tue, May 26, 2020 at 01:16:14PM +0300, Liran Alon wrote:
>> On 26/05/2020 9:17, Mike Rapoport wrote:
>>> On Mon, May 25, 2020 at 04:47:18PM +0300, Liran Alon wrote:
>>>> On 22/05/2020 15:51, Kirill A. Shutemov wrote:
>>>>
>>> Out of curiosity, do we actually have some numbers for the "non-trivial
>>> performance cost"? For instance, for the KVM use case?
>>>
>> Dig into XPFO mailing-list discussions to find out...
>> I just remember that this was one of the main concerns regarding XPFO.
> The XPFO benchmarks measure total XPFO cost, and a huge share of it comes
> from TLB shootdowns.

Yes, TLB shootdown when pages transition between owners is huge. The
XPFO folks did a lot of work to try to optimize some of this overhead
away. But, it's still a concern.
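
To make that cost concrete, here is a minimal sketch of what the host
side of a page-ownership transition involves. The helper below is mine,
not code from this series; set_direct_map_invalid_noflush() and
flush_tlb_kernel_range() are the existing kernel primitives such a path
would presumably build on, and the flush is where the shootdown cost
comes from.

#include <linux/mm.h>
#include <linux/set_memory.h>
#include <asm/tlbflush.h>

/*
 * Illustrative sketch only: drop one page from the host direct map.
 * The function name is hypothetical; the primitives are real.
 */
static int unmap_from_direct_map(struct page *page)
{
	unsigned long addr = (unsigned long)page_address(page);
	int ret;

	/* Clear the direct-map PTE; no TLB maintenance happens here. */
	ret = set_direct_map_invalid_noflush(page);
	if (ret)
		return ret;

	/*
	 * The expensive part: a kernel-range flush turns into an
	 * IPI-driven shootdown on every online CPU, and it has to
	 * happen each time a page changes owner between host and guest.
	 */
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
	return 0;
}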

The concern with XPFO was that it could affect *all* application page
allocation. This approach cheats a bit and only goes after guest VM
pages. It's significantly more work to allocate a page and map it into
a guest than it is to, for instance, allocate an anonymous user page.
That means that the *additional* overhead of things like this for guest
memory matters a lot less.

> It's not exactly a measurement of the impact of direct map
> fragmentation on a workload running inside a virtual machine.

While the VM *itself* is running, there is zero overhead: the host
direct map is not used at *all*. Guest and host TLB entries do share
the same space, so there could be some increased TLB pressure, but
that is really a secondary effect, and it would only occur if the
guest exits and the host runs and starts evicting TLB entries.

The other effect I can think of is when the guest exits and the host
does some work on the guest's behalf, like instruction emulation. The
host would then see worse TLB behavior because it is using the
(fragmented) direct map.
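
Just to put a rough number on "worse TLB behavior" (back-of-the-envelope
arithmetic on my part, not a measurement of this series): once a chunk
of the direct map has been split down to 4k mappings, covering the same
range takes vastly more TLB entries.

#include <stdio.h>

/* TLB entries needed to cover 1 GiB of direct map at each mapping size. */
int main(void)
{
	const unsigned long region = 1UL << 30;			/* 1 GiB  */

	printf("1 GiB mappings: %lu entry\n",   region >> 30);	/*      1 */
	printf("2 MiB mappings: %lu entries\n", region >> 21);	/*    512 */
	printf("4 KiB mappings: %lu entries\n", region >> 12);	/* 262144 */
	return 0;
}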

But, both of those things require VMEXITs. The more exits, the more
overhead you _might_ observe. What I've been hearing from KVM folks is
that exits are getting more and more rare and the hardware designers are
working hard to minimize them.

That's especially good news because it means that even if the situation
isn't perfect, it's only bound to get *better* over time, not worse.
