From: Jan Blunck <jblunck@suse.de>
Subject: [PATCH] atomic: Fix _atomic_dec_and_lock() deadlock on UP
Date: 2009-06-15

_atomic_dec_and_lock() can deadlock on UP when spinlock debugging is
enabled. Currently, on UP we unconditionally call spin_lock() first, which
goes through __spin_lock_debug() and really takes the lock, even on UP.
This deadlocks when we call atomic_dec_and_lock() while already holding
the lock, knowing that the counter cannot drop to zero because we hold
another reference. Instead, use the SMP code path, which only takes the
lock when it is actually needed.
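
To make the failure mode concrete, here is a minimal sketch (illustrative
only, not part of the patch; the struct, lock, and function names are
invented) of the kind of caller that hits the deadlock:

#include <asm/atomic.h>
#include <linux/spinlock.h>

/* Hypothetical object whose refcount is protected by obj_lock. */
struct obj {
	atomic_t count;
};

static DEFINE_SPINLOCK(obj_lock);

/*
 * Called with obj_lock already held and with the caller pinning an
 * extra reference, so the count cannot drop to zero here.
 */
static void __put_obj(struct obj *o)
{
	/*
	 * The SMP fast path drops the count with atomic_add_unless() and
	 * never touches obj_lock.  The old UP code skipped that fast path
	 * and went straight to spin_lock(&obj_lock); with
	 * CONFIG_DEBUG_SPINLOCK that spins on the already-held lock in
	 * __spin_lock_debug() and deadlocks.
	 */
	if (atomic_dec_and_lock(&o->count, &obj_lock)) {
		/* Not reached in this scenario. */
		spin_unlock(&obj_lock);
	}
}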

Signed-off-by: Jan Blunck <jblunck@suse.de>
Signed-off-by: Valerie Aurora (Henson) <vaurora@redhat.com>
---
 lib/dec_and_lock.c | 3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
index a65c314..e73822a 100644
--- a/lib/dec_and_lock.c
+++ b/lib/dec_and_lock.c
@@ -19,11 +19,10 @@
  */
 int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
 {
-#ifdef CONFIG_SMP
 	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
 	if (atomic_add_unless(atomic, -1, 1))
 		return 0;
-#endif
+
 	/* Otherwise do it the slow way */
 	spin_lock(lock);
 	if (atomic_dec_and_test(atomic))
--
1.6.0.6
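
For reference, this is how _atomic_dec_and_lock() reads with the hunk
applied, now identical on SMP and UP; the last four lines are the
unchanged tail of lib/dec_and_lock.c that falls outside the hunk's
context:

int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
{
	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
	if (atomic_add_unless(atomic, -1, 1))
		return 0;

	/* Otherwise do it the slow way */
	spin_lock(lock);
	if (atomic_dec_and_test(atomic))
		return 1;
	spin_unlock(lock);
	return 0;
}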

