    Date: 12 May 2020
    Subject: [PATCH] READ_ONCE, WRITE_ONCE, kcsan: Perform checks in __*_ONCE variants
    From: Marco Elver <elver@google.com>
    If left plain, using __READ_ONCE and __WRITE_ONCE will result in many
    KCSAN false positives, because the accesses are instrumented normally.
    To fix this, move the kcsan_check and data_race() into the __*_ONCE
    variants themselves.

    Cc: Will Deacon <will@kernel.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Paul E. McKenney <paulmck@kernel.org>
    Cc: Ingo Molnar <mingo@kernel.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Signed-off-by: Marco Elver <elver@google.com>
    ---
    A proposal to fix the problem with __READ_ONCE/__WRITE_ONCE and KCSAN
    false positives.

    Will, if this is completely off, please feel free to take this patch and
    fiddle with it until it looks like what you want.

    Note: Currently __WRITE_ONCE_SCALAR seems to serve no real purpose. Do
    we still need it?
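
    For illustration only (not part of the patch): below is a minimal,
    compilable user-space sketch of the shape __READ_ONCE and __WRITE_ONCE
    take once the kcsan_check and data_race() live inside them. The
    kcsan_check_atomic_*() hooks and data_race() are stubbed, and
    __unqual_scalar_typeof() is simplified to typeof(), purely so the
    snippet builds outside the kernel.

    #include <stdio.h>
    #include <stddef.h>

    /* Stand-ins for the real KCSAN hooks; in the kernel these mark the
     * access as atomic so KCSAN does not report a race on it. */
    static void kcsan_check_atomic_read(const volatile void *ptr, size_t size)
    { (void)ptr; (void)size; }
    static void kcsan_check_atomic_write(const volatile void *ptr, size_t size)
    { (void)ptr; (void)size; }

    /* In the kernel, data_race() hides the access from KCSAN; here it is
     * only an identity wrapper. */
    #define data_race(expr) (expr)

    #define __READ_ONCE(x)						\
    ({									\
    	kcsan_check_atomic_read(&(x), sizeof(x));			\
    	data_race(*(const volatile typeof(x) *)&(x));			\
    })

    #define __WRITE_ONCE(x, val)					\
    do {								\
    	kcsan_check_atomic_write(&(x), sizeof(x));			\
    	data_race(*(volatile typeof(x) *)&(x) = (val));			\
    } while (0)

    int main(void)
    {
    	int flag = 0;

    	__WRITE_ONCE(flag, 1);				/* marked write */
    	printf("flag = %d\n", __READ_ONCE(flag));	/* marked read  */
    	return 0;
    }

    Because the inner __*_ONCE now perform the check and the data_race()
    themselves, the *_SCALAR wrappers in the hunks below can drop their own
    kcsan_check_atomic_*()/data_race() calls.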
    ---
    include/linux/compiler.h | 15 +++++++++------
    1 file changed, 9 insertions(+), 6 deletions(-)

    diff --git a/include/linux/compiler.h b/include/linux/compiler.h
    index 741c93c62ecf..e902ca5de811 100644
    --- a/include/linux/compiler.h
    +++ b/include/linux/compiler.h
    @@ -224,13 +224,16 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
      * atomicity or dependency ordering guarantees. Note that this may result
      * in tears!
      */
    -#define __READ_ONCE(x)	(*(const volatile __unqual_scalar_typeof(x) *)&(x))
    +#define __READ_ONCE(x)						\
    +({									\
    +	kcsan_check_atomic_read(&(x), sizeof(x));			\
    +	data_race((*(const volatile __unqual_scalar_typeof(x) *)&(x))); \
    +})

     #define __READ_ONCE_SCALAR(x)					\
     ({									\
     	typeof(x) *__xp = &(x);						\
    -	__unqual_scalar_typeof(x) __x = data_race(__READ_ONCE(*__xp));	\
    -	kcsan_check_atomic_read(__xp, sizeof(*__xp));			\
    +	__unqual_scalar_typeof(x) __x = __READ_ONCE(*__xp);		\
     	smp_read_barrier_depends();					\
     	(typeof(x))__x;							\
     })
    @@ -243,14 +246,14 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,

     #define __WRITE_ONCE(x, val)					\
     do {									\
    -	*(volatile typeof(x) *)&(x) = (val);				\
    +	kcsan_check_atomic_write(&(x), sizeof(x));			\
    +	data_race(*(volatile typeof(x) *)&(x) = (val));			\
     } while (0)

     #define __WRITE_ONCE_SCALAR(x, val)				\
     do {									\
     	typeof(x) *__xp = &(x);						\
    -	kcsan_check_atomic_write(__xp, sizeof(*__xp));			\
    -	data_race(({ __WRITE_ONCE(*__xp, val); 0; }));			\
    +	__WRITE_ONCE(*__xp, val);					\
     } while (0)

     #define WRITE_ONCE(x, val)						\
    --
    2.26.2.645.ge9eca65c58-goog