 
From: Catalin Marinas <catalin.marinas@arm.com>
Date: Mon, 8 Jun 2015
Subject: Re: [RFC PATCH] mm: kmemleak: Optimise kmemleak_lock acquiring during kmemleak_scan
On Mon, Jun 08, 2015 at 06:06:59PM +0100, Catalin Marinas wrote:
> The kmemleak memory scanning uses the finer-grained object->lock
> spinlocks primarily to avoid races with the memory block freeing.
> However, the pointer lookup in the rb tree requires the kmemleak_lock
> to be held. This is currently done in the find_and_get_object()
> function for each pointer-like location read during scanning. While
> this keeps the latency of kmemleak_*() callbacks on other CPUs low,
> the memory scanning itself is slower.
>
> This patch moves the kmemleak_lock outside the core scan_block()
> function, allowing the spinlock to be acquired/released only once per
> scanned memory block rather than for each individual pointer-like
> value. The memory scanning performance is significantly improved (by
> an order of magnitude on an arm64 system).
>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> ---
>
> Andrew,
>
> While sorting out some of the kmemleak disabling races, I realised that
> kmemleak scanning performance can be improved. On an arm64 system I
> tested (albeit not a fast one, but with 6 CPUs and 8GB of RAM),
> immediately after boot a "time echo scan > /sys/kernel/debug/kmemleak"
> takes on average 70 sec. With this patch applied, I get on average 4.7
> sec.

I need to make a correction here, as I forgot I had lock proving
(lockdep) enabled in my .config when running the tests. With all the
spinlock debugging disabled, I get 9.5 sec without the patch vs 3.5 sec
with it. Still an improvement, but no longer by an order of magnitude.
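
For reference, the shape of the change is roughly the sketch below. This
is a simplified illustration rather than the actual diff: lookup_object()
is the rb tree search that find_and_get_object() previously wrapped in
its own kmemleak_lock round-trip, and update_refs() stands in for the
per-object pointer-colouring logic.

static void scan_block(void *_start, void *_end,
		       struct kmemleak_object *scanned)
{
	unsigned long *ptr;
	unsigned long flags;

	/*
	 * Take kmemleak_lock once for the whole memory block instead
	 * of once per pointer-like value, as the per-pointer
	 * find_and_get_object() calls used to do.
	 */
	read_lock_irqsave(&kmemleak_lock, flags);
	for (ptr = _start; (void *)ptr < _end; ptr++) {
		struct kmemleak_object *object;

		/* rb tree lookup; the caller already holds kmemleak_lock */
		object = lookup_object(*ptr, 1);
		if (!object)
			continue;

		/*
		 * The per-object spinlock is still taken, so the races
		 * with memory block freeing are covered as before. IRQs
		 * are already disabled by read_lock_irqsave() above.
		 */
		spin_lock(&object->lock);
		update_refs(object);
		spin_unlock(&object->lock);
	}
	read_unlock_irqrestore(&kmemleak_lock, flags);
}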

--
Catalin

