From: Thomas Gleixner <tglx@linutronix.de>
Subject: [PATCH 07/16] locking/bitspinlock: Cleanup PREEMPT_COUNT leftovers

CONFIG_PREEMPT_COUNT is now unconditionally enabled and the config option
will be removed. Clean up the leftovers before doing so.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 include/linux/bit_spinlock.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/include/linux/bit_spinlock.h b/include/linux/bit_spinlock.h
index bbc4730a6505..1e03d54b0b6f 100644
--- a/include/linux/bit_spinlock.h
+++ b/include/linux/bit_spinlock.h
@@ -90,10 +90,8 @@ static inline int bit_spin_is_locked(int bitnum, unsigned long *addr)
 {
 #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
 	return test_bit(bitnum, addr);
-#elif defined CONFIG_PREEMPT_COUNT
-	return preempt_count();
 #else
-	return 1;
+	return preempt_count();
 #endif
 }

--
2.20.1
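
For reference, with this change applied bit_spin_is_locked() reads roughly as
below. This is a sketch reconstructed from the hunk above, not a verbatim copy
of include/linux/bit_spinlock.h:

static inline int bit_spin_is_locked(int bitnum, unsigned long *addr)
{
#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
	return test_bit(bitnum, addr);
#else
	/*
	 * On !SMP builds the lock bit is never set; holding the lock is
	 * implied by preemption being disabled. With CONFIG_PREEMPT_COUNT
	 * unconditionally enabled, preempt_count() always reflects that,
	 * so the old "return 1" fallback is no longer needed.
	 */
	return preempt_count();
#endif
}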