    Subject: RE: rcu_read_lock lost its compiler barrier

    From: Paul E. McKenney
    > Sent: 03 June 2019 09:42
    ...
    > On kissing the kernel goodbye, a reasonable strategy might be to
    > identify the transformations that are actually occurring (like the
    > stores of certain constants called out above) and fix those.

    > We do
    > occasionally use READ_ONCE() to prevent load-fusing optimizations that
    > would otherwise cause the compiler to turn while-loops into if-statements
    > guarding infinite loops.

    In that case the variable ought to be volatile...
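
    Just to spell out the failure mode being described, a minimal sketch
    (the flag and the loops are made up; it assumes the usual kernel
    READ_ONCE() and cpu_relax() definitions):

        static int need_to_stop;        /* written by another thread */

        void wait_plain(void)
        {
                /*
                 * The compiler may fuse the loads: read need_to_stop once,
                 * hoist it, and emit "if (!need_to_stop) for (;;) ;".
                 */
                while (!need_to_stop)
                        cpu_relax();
        }

        void wait_once(void)
        {
                /* READ_ONCE() is a volatile access: a fresh load each pass. */
                while (!READ_ONCE(need_to_stop))
                        cpu_relax();
        }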

    > There is also the possibility of having the
    > compiler guys give us more command-line arguments.

    I wonder how much the code size (of anything) would increase
    if the compiler:
    1) Never read a value into a local more than once.
    2) Never write a value that wasn't requested by the code.
    3) Never use multiple memory accesses for 'machine word' (and
       smaller) items.

    (1) makes all reads READ_ONCE() except that the actual read
    can be delayed until further down the code.
    If I have a naive #define bswap32() I'd expect:
        v = bswap32(foo->bar);
    to possibly read foo->bar multiple times, but not:
        int foo_bar = foo->bar;
        v = bswap32(foo_bar);
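
    Concretely, a naive byte-swap macro mentions its argument several
    times, so each mention is potentially a separate load (sketch only,
    not the kernel's swab32(); struct foo is invented for illustration):

        #define naive_bswap32(x)                                        \
                ((((x) & 0xff000000u) >> 24) | (((x) & 0x00ff0000u) >> 8) | \
                 (((x) & 0x0000ff00u) <<  8) | (((x) & 0x000000ffu) << 24))

        struct foo { unsigned int bar; };

        unsigned int swap_direct(const struct foo *foo)
        {
                /* foo->bar may be read up to four times. */
                return naive_bswap32(foo->bar);
        }

        unsigned int swap_via_local(const struct foo *foo)
        {
                unsigned int foo_bar = foo->bar;        /* exactly one read */

                return naive_bswap32(foo_bar);
        }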

    (2) makes all writes WRITE_ONCE() except that if there are
    multiple writes to the same location, only the last need
    be done.
    In particular it stops speculative writes and the use of
    locations that are going to be written to as temporaries.
    It also stops foo->bar = ~0; being implemented as a clear
    then a decrement.
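
    i.e. (sketch only, p is just an aligned machine word; WRITE_ONCE()
    is a volatile store, which is what rules out the extra accesses):

        void set_all_ones_plain(unsigned long *p)
        {
                *p = ~0ul;              /* may legally become "*p = 0; (*p)--;" */
        }

        void set_all_ones_once(unsigned long *p)
        {
                /* One store of the final value; no speculative/temporary writes. */
                WRITE_ONCE(*p, ~0ul);
        }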

    (3) I'd never imagined the compiler would write the two halves
    of a word separately!

    If the compiler behaved like that (as one might expect it would)
    then READ_ONCE() would be a READ_NOW() for when the sequencing
    mattered.

    I was also recently surprised by the code I got from this loop:
        for (i = 0; i < limit; i++)
                sum64 += array32[i];
    (as in the IP checksum sum without add-carry support).
    The compiler unrolled it to use horrid sequences of sse3/avx
    instructions.
    This might be a gain for large enough buffers and a 'hot' cache,
    but for small buffers and a likely cold cache it is horrid.
    I guess such optimisations are valid, but I wonder how often
    they are an actual win for real programs.
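
    For reference, the loop as a standalone function (the types and
    names here are mine), plus the gcc switch that keeps it scalar:

        #include <stddef.h>
        #include <stdint.h>

        uint64_t sum32(const uint32_t *array32, size_t limit)
        {
                uint64_t sum64 = 0;
                size_t i;

                /*
                 * gcc's auto-vectoriser is what produces the sse/avx code;
                 * building this file with -fno-tree-vectorize keeps the
                 * plain scalar widening-add loop.
                 */
                for (i = 0; i < limit; i++)
                        sum64 += array32[i];

                return sum64;
        }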

    David

    -
    Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
    Registration No: 1397386 (Wales)
