From: Huang, Ying <ying.huang@intel.com>
Subject: Re: [PATCH -mm] mm: Clear to access sub-page last when clearing huge page
Date: 9 Aug 2017
Hi, Andrew,

Andrew Morton <akpm@linux-foundation.org> writes:

> On Mon, 7 Aug 2017 15:21:31 +0800 "Huang, Ying" <ying.huang@intel.com> wrote:
>
>> From: Huang Ying <ying.huang@intel.com>
>>
>> Huge pages help to reduce the TLB miss rate, but they have a larger
>> cache footprint, which can sometimes cause problems. For example,
>> when clearing a huge page on an x86_64 platform, the cache footprint
>> is 2M. But on a Xeon E5 v3 2699 CPU, there are 18 cores, 36 threads,
>> and only 45M LLC (last level cache). That is, on average, there is
>> 2.5M of LLC for each core and 1.25M for each thread. If the cache
>> pressure is heavy when clearing the huge page, and we clear the huge
>> page from beginning to end, it is possible that the beginning of the
>> huge page has been evicted from the cache by the time we finish
>> clearing the end of the huge page. And it is likely that the
>> application will access the beginning of the huge page right after
>> it has been cleared.
>>
>> To help with this situation, this patch changes the order in which
>> sub-pages are cleared when we clear a huge page. In quite a few
>> situations, we can know the address that the application will access
>> after the huge page is cleared, for example, in a page fault
>> handler. Instead of clearing the huge page from beginning to end, we
>> clear the sub-pages farthest from the sub-page to be accessed first,
>> and clear the sub-page to be accessed last. This makes the sub-page
>> to be accessed the most cache-hot, and the sub-pages around it more
>> cache-hot too. If we cannot know the address the application will
>> access, the beginning of the huge page is assumed to be the address
>> the application will access.
>>
>> With this patch, throughput increases ~28.3% in the vm-scalability
>> anon-w-seq test case with 72 processes on a 2-socket Xeon E5 v3 2699
>> system (36 cores, 72 threads). The test case creates 72 processes,
>> each of which mmaps a big anonymous memory area and writes to it
>> from beginning to end. For each process, the other processes act as
>> a background workload that generates heavy cache pressure. At the
>> same time, the cache miss rate is reduced from ~33.4% to ~31.7%, the
>> IPC (instructions per cycle) increases from 0.56 to 0.74, and the
>> time spent in user space is reduced by ~7.9%.
>>
>> Thanks to Andi Kleen for proposing to use the address to be accessed
>> to determine the order of sub-pages to clear.
>>
>> The access address used for hugetlbfs could be improved; that will
>> be done in another patch.
>
> I agree with what others said, plus...
>
>> @@ -4374,9 +4374,31 @@ void clear_huge_page(struct page *page,
>> }
>>
>> might_sleep();
>> - for (i = 0; i < pages_per_huge_page; i++) {
>> + VM_BUG_ON(clamp(addr_hint, addr, addr +
>> + (pages_per_huge_page << PAGE_SHIFT)) != addr_hint);
>> + n = (addr_hint - addr) / PAGE_SIZE;
>> + if (2 * n <= pages_per_huge_page) {
>> + base = 0;
>> + l = n;
>> + for (i = pages_per_huge_page - 1; i >= 2 * n; i--) {
>> + cond_resched();
>> + clear_user_highpage(page + i, addr + i * PAGE_SIZE);
>> + }
>> + } else {
>> + base = 2 * n - pages_per_huge_page;
>> + l = pages_per_huge_page - n;
>> + for (i = 0; i < base; i++) {
>> + cond_resched();
>> + clear_user_highpage(page + i, addr + i * PAGE_SIZE);
>> + }
>> + }
>> + for (i = 0; i < l; i++) {
>> + cond_resched();
>> + clear_user_highpage(page + base + i,
>> + addr + (base + i) * PAGE_SIZE);
>> cond_resched();
>> - clear_user_highpage(page + i, addr + i * PAGE_SIZE);
>> + clear_user_highpage(page + base + 2 * l - 1 - i,
>> + addr + (base + 2 * l - 1 - i) * PAGE_SIZE);
>
> Please document this design with a carefully written code comment.
> For example, why was "2 * n" chosen? What is it trying to achieve?

Sure.

"2 * n" here determines whether addr_hint falls in the first half (2 *
n <= pages_per_huge_page) or the second half (2 * n >
pages_per_huge_page) of the huge page, i.e., whether the fault
sub-page index n is before or after the midpoint.
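
To make the arithmetic concrete, here is a small standalone sketch
(mine, for illustration only, not the kernel code; PAGES_PER_HUGE_PAGE
and the sample n are made-up values) of how base and l fall out of n:

#include <stdio.h>

#define PAGES_PER_HUGE_PAGE 512	/* e.g. 2M huge page / 4K sub-pages */

int main(void)
{
	int n = 200;	/* hypothetical fault sub-page index */
	int base, l;

	if (2 * n <= PAGES_PER_HUGE_PAGE) {
		/* Fault in the first half: the patch clears the tail
		 * [2n, end) from the last sub-page downward first,
		 * then [0, 2n) converges on n. */
		base = 0;
		l = n;
	} else {
		/* Fault in the second half: the head [0, base) is
		 * cleared from 0 upward first, then [base, end)
		 * converges on n. */
		base = 2 * n - PAGES_PER_HUGE_PAGE;
		l = PAGES_PER_HUGE_PAGE - n;
	}
	printf("n=%d -> base=%d l=%d, converging window [%d, %d)\n",
	       n, base, l, base, base + 2 * l);
	return 0;
}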

> Also, the final clearing loop "for (i = 0; i < l; i++)" might cause
> eviction of data which was cached in the previous loop. Perhaps some
> additional gains will be made by clearing the hugepage in a
> left-right-left-right "start from the ends and work inwards" manner, if
> you see what I mean. So the 4k pages immediately surrounding addr_hint
> are the most-recently-cleared. Although accesses to the data at lower
> addresses than addr_hint are probably somewhat rare (and may be
> nonexistent in your synthetic test case).

Yes. I think I have done exactly that in the patch. In each iteration
of the final loop, two sub-pages are cleared: base + i and base + 2 *
l - 1 - i, that is, one to the left and one to the right of the fault
sub-page, converging until the fault sub-page itself is the last
sub-page cleared.
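
For illustration, here is a runnable toy version (mine, not the patch
itself) of that converging loop; clear_subpage() is a hypothetical
stand-in for clear_user_highpage() that just prints the clear order:

#include <stdio.h>

static void clear_subpage(int i)
{
	printf("clear sub-page %d\n", i);
}

static void clear_window_converging(int base, int l)
{
	int i;

	for (i = 0; i < l; i++) {
		clear_subpage(base + i);		/* left side */
		clear_subpage(base + 2 * l - 1 - i);	/* right side */
	}
	/* The last sub-page cleared is base + l, which equals the
	 * fault sub-page index n in both halves, so it ends up the
	 * most cache-hot. */
}

int main(void)
{
	clear_window_converging(0, 4);	/* order: 0 7 1 6 2 5 3 4 */
	return 0;
}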

Best Regards,
Huang, Ying
