 
From: Byungchul Park <byungchul.park@lge.com>
Subject: [PATCH v6 07/15] lockdep: Avoid adding redundant direct links of crosslocks
Date: Tue, 14 Mar 2017 14:59:54 +0900
    On my machine (QEMU x86_64, 4 cores, 512M memory, -enable-kvm), this
    patch makes no significant difference in /proc/lockdep_stats between
    before and after, so the patch looks unnecessary. However, I wonder
    whether that is still true on other systems. Could anybody check
    lockdep_stats on your system?

    Before (all crossrelease patches applied except this one):

    lock-classes: 988 [max: 8191]
    direct dependencies: 5814 [max: 32768]
    indirect dependencies: 18915
    all direct dependencies: 119802
    dependency chains: 6350 [max: 65536]
    dependency chain hlocks: 20771 [max: 327680]
    in-hardirq chains: 52
    in-softirq chains: 361
    in-process chains: 5937
    stack-trace entries: 80396 [max: 524288]
    combined max dependencies: 113926468
    hardirq-safe locks: 42
    hardirq-unsafe locks: 644
    softirq-safe locks: 129
    softirq-unsafe locks: 561
    irq-safe locks: 135
    irq-unsafe locks: 644
    hardirq-read-safe locks: 2
    hardirq-read-unsafe locks: 127
    softirq-read-safe locks: 11
    softirq-read-unsafe locks: 119
    irq-read-safe locks: 12
    irq-read-unsafe locks: 127
    uncategorized locks: 165
    unused locks: 1
    max locking depth: 14
    max bfs queue depth: 168
    debug_locks: 1

    After (all crossrelease patches applied, including this one):

    lock-classes: 980 [max: 8191]
    direct dependencies: 5604 [max: 32768]
    indirect dependencies: 18517
    all direct dependencies: 112620
    dependency chains: 6215 [max: 65536]
    dependency chain hlocks: 20401 [max: 327680]
    in-hardirq chains: 51
    in-softirq chains: 298
    in-process chains: 5866
    stack-trace entries: 78707 [max: 524288]
    combined max dependencies: 91220116
    hardirq-safe locks: 42
    hardirq-unsafe locks: 637
    softirq-safe locks: 117
    softirq-unsafe locks: 561
    irq-safe locks: 126
    irq-unsafe locks: 637
    hardirq-read-safe locks: 2
    hardirq-read-unsafe locks: 127
    softirq-read-safe locks: 10
    softirq-read-unsafe locks: 119
    irq-read-safe locks: 11
    irq-read-unsafe locks: 127
    uncategorized locks: 165
    unused locks: 1
    max locking depth: 15
    max bfs queue depth: 168
    debug_locks: 1

    -----8<-----
    From 803e905a4cbf6c10b776a9e272a3bda9e3ffaa95 Mon Sep 17 00:00:00 2001
    From: Byungchul Park <byungchul.park@lge.com>
    Date: Tue, 14 Mar 2017 14:59:54 +0900
    Subject: [PATCH v6 07/15] lockdep: Avoid adding redundant direct links of
    crosslocks

    We can skip adding a dependency 'AX -> B' if we ensure that 'AX ->
    the previous of B in hlocks' is created, where AX is a crosslock and
    B is a typical lock. Remember that two adjacent locks in hlocks
    generate a dependency 'prev -> next', that is, 'the previous of B in
    hlocks -> B' in this case.

    For example:

    in hlocks[]
    ------------
           ^ A (gen_id: 4) --+
           |                 | previous gen_id
           | B (gen_id: 3) <-+
           | C (gen_id: 3)
           | D (gen_id: 2)
    oldest | E (gen_id: 1)

    in xhlocks[]
    ------------
           ^ A (gen_id: 4, prev_gen_id: 3 (B's gen_id))
           | B (gen_id: 3, prev_gen_id: 3 (C's gen_id))
           | C (gen_id: 3, prev_gen_id: 2 (D's gen_id))
           | D (gen_id: 2, prev_gen_id: 1 (E's gen_id))
    oldest | E (gen_id: 1, prev_gen_id: NA)

    On commit for a crosslock AX (gen_id = 3), it's enough to add
    'AX -> C'; adding 'AX -> B' and 'AX -> A' is unnecessary since
    'AX -> C', 'C -> B' and 'B -> A', which are guaranteed to be
    generated, already cover them.

    This patch introduces a variable, prev_gen_id, to avoid adding such
    redundant dependencies. In other words, the previous lock in hlocks
    covers the dependency anyway if the previous lock's gen_id >= the
    crosslock's gen_id.

    Signed-off-by: Byungchul Park <byungchul.park@lge.com>
    ---
     include/linux/lockdep.h  | 11 +++++++++++
     kernel/locking/lockdep.c | 30 ++++++++++++++++++++++++++++--
     2 files changed, 39 insertions(+), 2 deletions(-)

    diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
    index 5356f71..31c6289 100644
    --- a/include/linux/lockdep.h
    +++ b/include/linux/lockdep.h
    @@ -284,6 +284,17 @@ struct held_lock {
      */
     struct hist_lock {
     	/*
    +	 * We can skip adding a dependency 'a target crosslock -> this
    +	 * lock' if we ensure that 'the target crosslock -> the
    +	 * previous lock in held_locks' is created. Remember that
    +	 * 'the previous lock in held_locks -> this lock' is guaranteed
    +	 * to be created, and 'A -> B' and 'B -> C' cover 'A -> C'.
    +	 *
    +	 * Keep the previous lock's gen_id to make the decision.
    +	 */
    +	unsigned int prev_gen_id;
    +
    +	/*
     	 * Each work of workqueue might run in a different context,
     	 * thanks to concurrency support of workqueue. So we have to
     	 * distinguish each work to avoid false positive.
    diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
    index ec4f6af..c78dd9d 100644
    --- a/kernel/locking/lockdep.c
    +++ b/kernel/locking/lockdep.c
    @@ -4716,13 +4716,18 @@ static inline int xhlock_used(struct hist_lock *xhlock)
     /*
      * Only access local task's data, so irq disable is only required.
      */
    -static void add_xhlock(struct held_lock *hlock)
    +static void add_xhlock(struct held_lock *hlock, unsigned int prev_gen_id)
     {
     	unsigned int idx = current->xhlock_idx++;
     	struct hist_lock *xhlock = &xhlock(idx);
     
     	/* Initialize hist_lock's members */
     	xhlock->hlock = *hlock;
    +	/*
    +	 * prev_gen_id is used to skip adding redundant dependencies,
    +	 * which can be covered by the previous lock in held_locks.
    +	 */
    +	xhlock->prev_gen_id = prev_gen_id;
     	xhlock->work_id = current->work_id;
     
     	xhlock->trace.nr_entries = 0;
    @@ -4761,10 +4766,30 @@ static int same_context_xhlock(struct hist_lock *xhlock)
      */
     static void check_add_xhlock(struct held_lock *hlock)
     {
    +	struct held_lock *prev;
    +	struct held_lock *start;
    +	unsigned int gen_id;
    +	unsigned int gen_id_invalid;
    +
     	if (!current->xhlocks || !depend_before(hlock))
     		return;
     
    -	add_xhlock(hlock);
    +	gen_id = (unsigned int)atomic_read(&cross_gen_id);
    +	/*
    +	 * gen_id_invalid should be old enough to be invalid.
    +	 * Current gen_id - (UINT_MAX / 4) would be a good
    +	 * value to meet it.
    +	 */
    +	gen_id_invalid = gen_id - (UINT_MAX / 4);
    +	start = current->held_locks;
    +
    +	for (prev = hlock - 1; prev >= start &&
    +			!depend_before(prev); prev--);
    +
    +	if (prev < start)
    +		add_xhlock(hlock, gen_id_invalid);
    +	else if (prev->gen_id != gen_id)
    +		add_xhlock(hlock, prev->gen_id);
     }
     
     /*
    @@ -4902,6 +4927,7 @@ static int commit_xhlocks(struct cross_lock *xlock)
     			break;
     
     		if (same_context_xhlock(xhlock) &&
    +		    before(xhlock->prev_gen_id, xlock->hlock.gen_id) &&
     		    !commit_xhlock(xlock, xhlock))
     			return 0;
     	}
    --
    1.9.1
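
    A note on gen_id_invalid in check_add_xhlock() above: assuming
    before() is the usual wraparound-safe comparison, (int)(a - b) < 0,
    subtracting UINT_MAX / 4 from the current gen_id yields a value that
    compares as older than the crosslock's gen_id no matter where the
    counter currently sits, so hlocks with no qualifying predecessor
    always pass the commit-time check. A stand-alone sketch, not kernel
    code:

    #include <limits.h>
    #include <stdio.h>

    /* Assumed shape of the series' wraparound-safe comparison. */
    static int before(unsigned int a, unsigned int b)
    {
    	return (int)(a - b) < 0;
    }

    int main(void)
    {
    	unsigned int samples[] = { 0u, 100u, UINT_MAX / 2, UINT_MAX - 1 };
    	int i;

    	for (i = 0; i < 4; i++) {
    		unsigned int cur = samples[i];
    		unsigned int invalid = cur - (UINT_MAX / 4);

    		/* before(invalid, cur) holds wherever the counter sits. */
    		printf("cur=%u -> before(invalid, cur)=%d\n",
    		       cur, before(invalid, cur));
    	}
    	return 0;	/* prints 1 for every sample */
    }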