    Subject: Re: [PATCH 2/2] hugepage: Allow parallelization of the hugepage fault path
    On Fri, 15 Jul 2011, Anton Blanchard wrote:

    > From: David Gibson <dwg@au1.ibm.com>
    >
    > At present, the page fault path for hugepages is serialized by a
    > single mutex. This is used to avoid spurious out-of-memory conditions
    > when the hugepage pool is fully utilized (two processes or threads can
    > race to instantiate the same mapping with the last hugepage from the
    > pool, the race loser returning VM_FAULT_OOM). This problem is
    > specific to hugepages, because it is normal to want to use every
    > single hugepage in the system - with normal pages we simply assume
    > there will always be a few spare pages which can be used temporarily
    > until the race is resolved.
    >
    > Unfortunately this serialization also means that clearing of hugepages
    > cannot be parallelized across multiple CPUs, which can lead to very
    > long process startup times when using large numbers of hugepages.
    >
    > This patch improves the situation by replacing the single mutex with a
    > table of mutexes, selected based on a hash of the address_space and
    > file offset being faulted (or mm and virtual address for MAP_PRIVATE
    > mappings).
    >
    > From: Anton Blanchard <anton@samba.org>
    >
    > Forward ported and made a few changes:
    >
    > - Use the Jenkins hash to scatter the hash values, which is better than
    > just using the low bits.
    >
    > - Always round num_fault_mutexes to a power of two to avoid an
    > expensive modulus in the hash calculation.
    >
    > I also tested this patch on a large POWER7 box using a simple parallel
    > fault testcase:
    >
    > http://ozlabs.org/~anton/junkcode/parallel_fault.c
    >
    > Command line options:
    >
    > parallel_fault <nr_threads> <size in kB> <skip in kB>
    >
    >
    > First the time taken to fault 128GB of 16MB hugepages:
    >
    > # time hugectl --heap ./parallel_fault 1 134217728 16384
    > 40.68 seconds
    >
    > Now the same test with 64 concurrent threads:
    > # time hugectl --heap ./parallel_fault 64 134217728 16384
    > 39.34 seconds
    >
    > Hardly any speedup. Finally the 64 concurrent threads test with
    > this patch applied:
    > # time hugectl --heap ./parallel_fault 64 134217728 16384
    > 0.85 seconds
    >
    > We go from 40.68 seconds to 0.85 seconds, an improvement of 47.9x.
    >
    > This was tested with the libhugetlbfs test suite, and the PASS/FAIL
    > count was the same before and after this patch.
    >
    >
    > Signed-off-by: David Gibson <dwg@au1.ibm.com>
    > Signed-off-by: Anton Blanchard <anton@samba.org>

    Tested-by: Eric B Munson <emunson@mgebm.net>
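
    As a rough illustration of the locking scheme described in the patch
    above, here is a minimal userspace sketch: a table of mutexes whose size
    is a power of two, indexed by a hash of the fault key (mapping plus file
    offset for shared mappings, mm plus virtual address for private ones).
    Every identifier below (NUM_FAULT_MUTEXES, fault_mutex, mix_hash, the
    dummy mapping pointer) is made up for the example, and the simple 64-bit
    mixer only stands in for the Jenkins hash (jhash) that the patch
    actually uses.

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Power of two, so indexing can use a mask instead of a modulus. */
    #define NUM_FAULT_MUTEXES 64u

    static pthread_mutex_t fault_mutex_table[NUM_FAULT_MUTEXES];

    /* Stand-in mixer; the real patch uses the Jenkins hash (jhash). */
    static uint32_t mix_hash(uint64_t a, uint64_t b)
    {
        uint64_t h = (a ^ b) * 0x9e3779b97f4a7c15ull;

        h ^= h >> 33;
        h *= 0xc2b2ae3d27d4eb4full;
        h ^= h >> 29;
        return (uint32_t)h;
    }

    /*
     * Pick the mutex serializing this fault: shared mappings hash the
     * backing mapping and file offset, private mappings hash the mm and
     * the faulting virtual address, so unrelated faults usually take
     * different locks while racing faults on the same page still collide.
     */
    static pthread_mutex_t *fault_mutex(const void *mapping_or_mm,
                                        uint64_t off_or_addr)
    {
        uint32_t hash = mix_hash((uintptr_t)mapping_or_mm, off_or_addr);

        /* '& (N - 1)' replaces an expensive '% N' because N is 2^k. */
        return &fault_mutex_table[hash & (NUM_FAULT_MUTEXES - 1)];
    }

    int main(void)
    {
        const void *mapping = (const void *)0x1000; /* dummy address_space */

        for (unsigned int i = 0; i < NUM_FAULT_MUTEXES; i++)
            pthread_mutex_init(&fault_mutex_table[i], NULL);

        /* Faults on different offsets usually pick different mutexes, so
         * the slow hugepage clearing can run in parallel across CPUs. */
        for (uint64_t off = 0; off < 4; off++)
            printf("offset %llu -> mutex %ld\n",
                   (unsigned long long)off,
                   (long)(fault_mutex(mapping, off) - fault_mutex_table));

        return 0;
    }

    Two threads racing to instantiate the same hugepage still hash to the
    same mutex and serialize, preserving the protection against the spurious
    VM_FAULT_OOM described above, while faults on different pages can clear
    their hugepages concurrently.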