    Subject: [PATCH 5.10 034/306] dm crypt: Avoid percpu_counter spinlock contention in crypt_page_alloc()
    From: Arne Welzel <arne.welzel@corelight.com>

    commit 528b16bfc3ae5f11638e71b3b63a81f9999df727 upstream.

    On systems with many cores using dm-crypt, heavy spinlock contention in
    percpu_counter_compare() can be observed when the page allocation limit
    for a given device is reached or close to being reached. This is due
    to percpu_counter_compare() taking a spinlock to compute an exact
    result on potentially many CPUs at the same time.

    Switch to non-exact comparison of allocated and allowed pages by using
    the value returned by percpu_counter_read_positive() to avoid taking
    the percpu_counter spinlock.
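
    For illustration, the difference between the two paths can be sketched in
    plain userspace C (built with e.g. cc -pthread). This is a simplified
    paraphrase, not the kernel code: NR_CPUS, BATCH, the struct layout and the
    pthread spinlock below are stand-ins for the real machinery in
    lib/percpu_counter.c.

    #include <pthread.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define NR_CPUS 256     /* illustrative values, not the kernel's */
    #define BATCH   32

    struct pcpu_counter {
            pthread_spinlock_t lock;
            int64_t count;              /* aggregated "global" part        */
            int32_t pcpu[NR_CPUS];      /* per-CPU deltas, |delta| < BATCH */
    };

    /* Approximate, lock-free read in the style of
     * percpu_counter_read_positive(): may be off by up to
     * (BATCH - 1) * NR_CPUS, but never touches the lock. */
    static int64_t pcpu_read_positive(struct pcpu_counter *c)
    {
            int64_t v = c->count;

            return v < 0 ? 0 : v;
    }

    /* Exact sum: takes the shared spinlock and walks every CPU. */
    static int64_t pcpu_sum(struct pcpu_counter *c)
    {
            int64_t sum;
            int cpu;

            pthread_spin_lock(&c->lock);
            sum = c->count;
            for (cpu = 0; cpu < NR_CPUS; cpu++)
                    sum += c->pcpu[cpu];
            pthread_spin_unlock(&c->lock);

            return sum;
    }

    /* Mirrors the control flow of __percpu_counter_compare(): it stays
     * lock-free only while the rough count is far from rhs; once the
     * counter sits near the dm-crypt page limit it falls back to the
     * locked exact sum, which is where the contention shows up. */
    static int pcpu_compare(struct pcpu_counter *c, int64_t rhs)
    {
            int64_t count = c->count;

            if (llabs(count - rhs) > (int64_t)BATCH * NR_CPUS)
                    return count > rhs ? 1 : -1;

            count = pcpu_sum(c);
            return (count > rhs) - (count < rhs);
    }

    int main(void)
    {
            struct pcpu_counter c = { .count = 1000 };

            pthread_spin_init(&c.lock, PTHREAD_PROCESS_PRIVATE);
            /* Near the limit: the exact compare has to take the lock... */
            (void)pcpu_compare(&c, 1000 + BATCH);
            /* ...while the approximate read never does. */
            (void)pcpu_read_positive(&c);

            return 0;
    }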

    This may over/under estimate the actual number of allocated pages by at
    most (batch-1) * num_online_cpus().

    Currently, batch is bounded by 32. The system on which this issue was
    first observed has 256 CPUs and 512GB of RAM. With a 4k page size, this
    change may over/under estimate by at most ~31MB. With ~10G (2%) allowed
    dm-crypt allocations, this seems an acceptable error, and is certainly
    preferable to running into the spinlock contention.
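
    As a back-of-the-envelope check of those numbers (batch, CPU count and
    page size taken from the paragraph above; this throwaway program is only
    here to make the arithmetic explicit):

    #include <stdio.h>

    int main(void)
    {
            const long long batch = 32;         /* "batch is bounded by 32"  */
            const long long cpus = 256;         /* CPUs on the machine above */
            const long long page_size = 4096;   /* 4k page size              */
            const long long ram = 512LL << 30;  /* 512GB of RAM              */

            /* Worst case over/under estimate:
             * (batch - 1) * num_online_cpus() pages. */
            long long slack = (batch - 1) * cpus * page_size;
            /* ~2% of RAM allowed for dm-crypt page allocations, per above. */
            long long limit = ram * 2 / 100;

            printf("slack: ~%lld MiB\n", slack >> 20);   /* prints ~31 MiB */
            printf("limit: ~%lld GiB\n", limit >> 30);   /* prints ~10 GiB */

            return 0;
    }

    Both figures match the ~31MB drift and ~10G allowance quoted above.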

    This behavior was reproduced on an EC2 c5.24xlarge instance with 96 CPUs
    and 192GB RAM as follows, but can be provoked on systems with fewer CPUs
    as well.

    * Disable swap
    * Tune vm settings to promote regular writeback
    $ echo 50 > /proc/sys/vm/dirty_expire_centisecs
    $ echo 25 > /proc/sys/vm/dirty_writeback_centisecs
    $ echo $((128 * 1024 * 1024)) > /proc/sys/vm/dirty_background_bytes

    * Create 8 dm-crypt devices based on files on a tmpfs
    * Create and mount an ext4 filesystem on each crypt device
    * Run stress-ng --hdd 8 within one of the above filesystems

    Total %system usage collected from sysstat goes to ~35%. Write throughput
    on the underlying loop device is ~2GB/s. perf profiling an individual
    kworker kcryptd thread shows the following profile, indicating spinlock
    contention in percpu_counter_compare():

    99.98% 0.00% kworker/u193:46 [kernel.kallsyms] [k] ret_from_fork
    |
    --ret_from_fork
    kthread
    worker_thread
    |
    --99.92%--process_one_work
    |
    |--80.52%--kcryptd_crypt
    | |
    | |--62.58%--mempool_alloc
    | | |
    | | --62.24%--crypt_page_alloc
    | | |
    | | --61.51%--__percpu_counter_compare
    | | |
    | | --61.34%--__percpu_counter_sum
    | | |
    | | |--58.68%--_raw_spin_lock_irqsave
    | | | |
    | | | --58.30%--native_queued_spin_lock_slowpath
    | | |
    | | --0.69%--cpumask_next
    | | |
    | | --0.51%--_find_next_bit
    | |
    | |--10.61%--crypt_convert
    | | |
    | | |--6.05%--xts_crypt
    ...

    After applying this patch and running the same test, %system usage drops
    to ~7% and write throughput on the loop device increases to ~2.7GB/s.
    perf report shows mempool_alloc() at ~8% of the profile rather than ~62%,
    and the percpu_counter spinlock is no longer hit:

    |--8.15%--mempool_alloc
    | |
    | |--3.93%--crypt_page_alloc
    | | |
    | | --3.75%--__alloc_pages
    | | |
    | | --3.62%--get_page_from_freelist
    | | |
    | | --3.22%--rmqueue_bulk
    | | |
    | | --2.59%--_raw_spin_lock
    | | |
    | | --2.57%--native_queued_spin_lock_slowpath
    | |
    | --3.05%--_raw_spin_lock_irqsave
    | |
    | --2.49%--native_queued_spin_lock_slowpath

    Suggested-by: DJ Gregor <dj@corelight.com>
    Reviewed-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Arne Welzel <arne.welzel@corelight.com>
    Fixes: 5059353df86e ("dm crypt: limit the number of allocated pages")
    Cc: stable@vger.kernel.org
    Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    ---
    drivers/md/dm-crypt.c | 7 ++++++-
    1 file changed, 6 insertions(+), 1 deletion(-)

    --- a/drivers/md/dm-crypt.c
    +++ b/drivers/md/dm-crypt.c
    @@ -2643,7 +2643,12 @@ static void *crypt_page_alloc(gfp_t gfp_
     	struct crypt_config *cc = pool_data;
     	struct page *page;
     
    -	if (unlikely(percpu_counter_compare(&cc->n_allocated_pages, dm_crypt_pages_per_client) >= 0) &&
    +	/*
    +	 * Note, percpu_counter_read_positive() may over (and under) estimate
    +	 * the current usage by at most (batch - 1) * num_online_cpus() pages,
    +	 * but avoids potential spinlock contention of an exact result.
    +	 */
    +	if (unlikely(percpu_counter_read_positive(&cc->n_allocated_pages) >= dm_crypt_pages_per_client) &&
     	    likely(gfp_mask & __GFP_NORETRY))
     		return NULL;

