Subject: [PATCH 4.14 045/166] seqlock: Require WRITE_ONCE surrounding raw_seqcount_barrier
    From: Marco Elver <elver@google.com>

    [ Upstream commit bf07132f96d426bcbf2098227fb680915cf44498 ]

This patch proposes to require marked atomic accesses surrounding
raw_write_seqcount_barrier. We reason that otherwise there is no way to
guarantee either propagation or atomicity of the writes before/after the
barrier [1]. For example, consider the case where the compiler tears
stores either before or after the barrier; readers may then observe a
partial value, and because the readers are unaware that writes are in
progress (the writes are not in a seq-writer critical section), they will
complete the seq-reader critical section having observed some partial state.
    [1] https://lwn.net/Articles/793253/
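
As an illustration (not part of the patch), here is the X/Y example from
the seqlock.h comment with the reader side written out; the READ_ONCE()
markings in the reader are an assumption added here for symmetry, since
the patch itself only amends the writer example:

    static seqcount_t seq = SEQCNT_ZERO(seq);
    static bool X = true, Y;

    static void reader(void)
    {
    	bool x, y;
    	unsigned int start;

    	do {
    		start = read_seqcount_begin(&seq);
    		/* Plain loads would race with the writer's marked stores. */
    		x = READ_ONCE(X);
    		y = READ_ONCE(Y);
    	} while (read_seqcount_retry(&seq, start));

    	/* The barrier in the writer guarantees at least one is observed true. */
    	BUG_ON(!x && !y);
    }

    static void writer(void)
    {
    	WRITE_ONCE(Y, true);
    	raw_write_seqcount_barrier(&seq);
    	WRITE_ONCE(X, false);
    }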

This came up when designing and implementing KCSAN, because KCSAN would
flag these accesses as data races. After careful analysis, the reasoning
above led us to conclude that the best thing to do is to propose an
amendment to the raw_write_seqcount_barrier usage.
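
For reference, the 4.14-era body of raw_write_seqcount_barrier() is roughly
the following; the sequence count is only odd between the two increments, so
writes placed before or after the barrier are not covered by a write critical
section, which is why they must be marked:

    static inline void raw_write_seqcount_barrier(seqcount_t *s)
    {
    	s->sequence++;	/* odd: readers spanning the barrier will retry */
    	smp_wmb();	/* order writes before the barrier against those after */
    	s->sequence++;	/* even again: no write section remains open */
    }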

    Signed-off-by: Marco Elver <elver@google.com>
    Acked-by: Paul E. McKenney <paulmck@kernel.org>
    Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>
    ---
    include/linux/seqlock.h | 11 +++++++++--
    1 file changed, 9 insertions(+), 2 deletions(-)

    diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
    index f189a8a3bbb88..7b3b5d05ab0de 100644
    --- a/include/linux/seqlock.h
    +++ b/include/linux/seqlock.h
    @@ -243,6 +243,13 @@ static inline void raw_write_seqcount_end(seqcount_t *s)
    * usual consistency guarantee. It is one wmb cheaper, because we can
    * collapse the two back-to-back wmb()s.
    *
+ * Note that writes surrounding the barrier should be declared atomic (e.g.
    + * via WRITE_ONCE): a) to ensure the writes become visible to other threads
    + * atomically, avoiding compiler optimizations; b) to document which writes are
    + * meant to propagate to the reader critical section. This is necessary because
+ * neither the writes before nor after the barrier are enclosed in a seq-writer
    + * critical section that would ensure readers are aware of ongoing writes.
    + *
    * seqcount_t seq;
    * bool X = true, Y = false;
    *
    @@ -262,11 +269,11 @@ static inline void raw_write_seqcount_end(seqcount_t *s)
    *
    * void write(void)
    * {
    - * Y = true;
    + * WRITE_ONCE(Y, true);
    *
    * raw_write_seqcount_barrier(seq);
    *
    - * X = false;
    + * WRITE_ONCE(X, false);
    * }
    */
    static inline void raw_write_seqcount_barrier(seqcount_t *s)
    --
    2.25.1

