Subject: Re: [RFC][PATCH 0/6] more detailed per-process transparent hugepage statistics
On Tue, 2011-02-01 at 21:39 +0100, Andrea Arcangeli wrote:
> So now the speedup
> from hugepages needs to also offset the cost of the more frequent
> split/collapse events that didn't happen before.

My concern here is the downward slope. I interpret that as saying that
we'll eventually have _zero_ THPs. Plus, the benefits are decreasing
constantly, even though the scanning overhead is fixed (or even
increasing).

> So I guess considering the time is of the order of 2/3 hours and there
> are "only" 88G of memory, speeding up khugepaged is going to be
> beneficial considering how big boost hugepages gives to the guest with
> NPT/EPT and even bigger boost for regular shadow paging, but it also
> depends on guest. In short khugepaged by default is tuned in a way
> that can't run in the way of the CPU.

I guess we could also try and figure out whether the khugepaged CPU
overhead really comes from the scanning or the collapsing operations
themselves. Should be as easy as some oprofiling.

If it really is the scanning, I bet we could be a lot more efficient
with khugepaged as well. In the case of KVM guests, we're going to have
awfully fixed virtual addresses and processes where collapsing can take
place.

It might make sense to just have split_huge_page() stick the vaddr and
the mm in a queue. khugepaged could scan those addresses first instead
of just going after the system as a whole.
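
To make that concrete, here is a rough sketch of the kind of hinting I
have in mind. The names (note_split(), khugepaged_split_queue) are made
up and none of this is compile-tested; a real version would also need
to pin or otherwise track the mm and drop stale entries when it exits:

struct split_hint {
        struct list_head list;
        struct mm_struct *mm;
        unsigned long address;
};

static LIST_HEAD(khugepaged_split_queue);
static DEFINE_SPINLOCK(split_queue_lock);

/* called from the split path once the huge page has been split */
static void note_split(struct mm_struct *mm, unsigned long address)
{
        struct split_hint *hint = kmalloc(sizeof(*hint), GFP_ATOMIC);

        if (!hint)
                return; /* best effort; the normal scan catches it */

        hint->mm = mm;
        hint->address = address & HPAGE_PMD_MASK;

        spin_lock(&split_queue_lock);
        list_add_tail(&hint->list, &khugepaged_split_queue);
        spin_unlock(&split_queue_lock);
}

khugepaged would then drain this list and try those (mm, address)
ranges first, and only fall back to scanning every registered mm once
the list is empty.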

For cases where the page got split, but wasn't modified, should we have
a non-copying, non-allocating fastpath to re-merge it?
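
Handwaving, but the check might look something like this (pure sketch,
not compile-tested, assuming the original compound page is still
around, that dirtiness is tested separately, and that we hold the page
table lock):

/*
 * Can we just re-install a PMD over the original compound page?  Only
 * if every PTE still maps the matching subpage: no COW copies, nothing
 * unmapped or migrated in the meantime.
 */
static bool range_still_maps_original(pte_t *pte, struct page *head)
{
        int i;

        for (i = 0; i < HPAGE_PMD_NR; i++) {
                pte_t entry = pte[i];

                if (!pte_present(entry) || pte_page(entry) != head + i)
                        return false;
        }
        return true;
}

If that holds, the collapse could skip the new allocation and the copy
entirely and just rebuild the PMD mapping in place.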

-- Dave


