Subject: Re: [RFC 0/6] the big khugepaged redesign
On 02/23/2015 11:46 PM, Davidlohr Bueso wrote:
> On Mon, 2015-02-23 at 13:58 +0100, Vlastimil Babka wrote:
>> Recently, concern was expressed (e.g. [1]) about whether the quite aggressive
>> THP allocation attempts on page faults are a good performance trade-off.
>>
>> - THP allocations add to page fault latency, as high-order allocations are
>> notoriously expensive. The page allocation slowpath now does extra checks for
>> GFP_TRANSHUGE && !PF_KTHREAD to avoid the more expensive synchronous
>> compaction for user page faults. But even async compaction can be expensive.
>> - During the first page fault in a 2MB range we cannot predict how much of the
>> range will actually be accessed - we can theoretically waste as many as 511
>> pages [2]. Or, the pages in the range might be accessed from CPUs on
>> different NUMA nodes, and while base pages could all be local, the THP could
>> be remote to all but one CPU. The cost of remote accesses due to this false
>> sharing could be higher than any savings on the TLB.
>> - The interaction with memcg is also problematic [1].
>>
>> Now I don't have any hard data to show how big these problems are, and I
>> expect we will discuss this at LSF/MM (and hope somebody has such data [3]).
>> But it's certain that e.g. SAP recommends disabling THP [4] for their apps
>> for performance reasons.
>
> There are plenty of examples of this, e.g. for Oracle:
>
> https://blogs.oracle.com/linux/entry/performance_issues_with_transparent_huge
> http://oracle-base.com/articles/linux/configuring-huge-pages-for-oracle-on-linux-64.php

Just stumbled upon more references when catching up on lwn:

http://lwn.net/Articles/634797/
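
For anyone not following mm development closely, here is a rough standalone
sketch of the policy the first bullet above refers to; the function and
parameter names below are mine, not the kernel's - the real check sits in the
page allocation slowpath in mm/page_alloc.c:

#include <stdbool.h>

enum migrate_mode { MIGRATE_ASYNC, MIGRATE_SYNC_LIGHT };

/*
 * Illustrative only: a THP allocation (GFP_TRANSHUGE) coming from a
 * user task (!PF_KTHREAD, i.e. a page fault) is kept on cheap
 * asynchronous compaction to bound fault latency; anything else,
 * notably khugepaged running as a kernel thread, may retry with the
 * more expensive sync-light compaction.
 */
static enum migrate_mode compaction_retry_mode(bool gfp_transhuge, bool kthread)
{
	if (gfp_transhuge && !kthread)
		return MIGRATE_ASYNC;		/* user THP fault: stay cheap */

	return MIGRATE_SYNC_LIGHT;		/* e.g. khugepaged: may go sync */
}

Even with that check in place, the async attempt itself is not free, which is
part of the motivation for moving this work out of the fault path.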


