    From: Satyam Sharma <ssatyam@cse.iitk.ac.in>
    Date: 2007-07-23
    Subject: [PATCH 6/8] i386: bitops: Don't mark memory as clobbered unnecessarily


    The goal is to let gcc generate good, beautiful, optimized code.

    But test_and_set_bit, test_and_clear_bit, __test_and_change_bit,
    and test_and_change_bit unnecessarily mark all of memory as clobbered,
    thereby preventing gcc from doing perfectly valid optimizations.
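    (As a minimal user-space illustration, not taken from the kernel:
    in a toy like the one below, the "memory" clobber tells gcc that
    the asm may have touched *any* memory, so values the compiler had
    cached in registers must be reloaded after the asm.)

	/*
	 * Toy sketch (hypothetical, user-space): the "memory" clobber
	 * forces gcc to re-read *x after the asm, even though the asm
	 * only ever writes *addr.
	 */
	static inline void set_bit_clobber(int nr, unsigned long *addr)
	{
		__asm__ __volatile__(
			"btsl %1,%0"
			:"=m" (*addr)
			:"r" (nr) : "memory");
	}

	int example(unsigned long *bits, int *x)
	{
		int a = *x;		/* first load of *x */
		set_bit_clobber(5, bits);
		return a + *x;		/* clobber forces a second load */
	}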

    The case of __test_and_change_bit() is particularly surprising, given
    that it is the non-atomic variant, for which we make no guarantees
    at all.

    Even in the other three cases, the only instruction that accesses
    shared memory is the btsl/btrl/btcl itself, and the locking and
    atomicity it requires are already handled by the lock prefix
    (sketched below).
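    (Conceptually, LOCK_PREFIX reduces to the sketch below; the actual
    definition in this era, in include/asm-i386/alternative.h, also
    records the instruction's address in a .smp_locks section so the
    prefix can be patched out at runtime on UP.)

	/*
	 * Simplified sketch of LOCK_PREFIX -- not the literal kernel
	 * definition: on SMP the bit instruction gets a lock prefix,
	 * making its read-modify-write cycle atomic.
	 */
	#ifdef CONFIG_SMP
	#define LOCK_PREFIX "lock ; "
	#else
	#define LOCK_PREFIX ""
	#endif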
    Also, "=m" is already specified in the output operands of the asm
    for the passed bit-string pointer, so the gcc optimizer knows what
    to do and what not to do anyway.
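    (A user-space sketch of the post-patch shape, with the lock prefix
    written out literally for illustration: the "=m" (*addr) output
    alone already invalidates anything gcc had cached from *addr.)

	/*
	 * Hypothetical user-space sketch of the post-patch form.  The
	 * "=m" (*addr) output operand tells gcc this asm writes *addr,
	 * so cached values of *addr are discarded -- but unrelated
	 * memory no longer needs to be spilled and reloaded.
	 * Returns nonzero if the bit was already set.
	 */
	static inline int test_and_set_bit_sketch(int nr, unsigned long *addr)
	{
		int oldbit;

		__asm__ __volatile__(
			"lock ; btsl %2,%1\n\tsbbl %0,%0"
			:"=r" (oldbit), "=m" (*addr)
			:"r" (nr));
		return oldbit;
	}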

    So there's no point clobbering all of memory in these functions.

    Signed-off-by: Satyam Sharma <ssatyam@cse.iitk.ac.in>
    Cc: David Howells <dhowells@redhat.com>
    Cc: Nick Piggin <nickpiggin@yahoo.com.au>

    ---

    include/asm-i386/bitops.h | 8 ++++----
    1 files changed, 4 insertions(+), 4 deletions(-)

    diff --git a/include/asm-i386/bitops.h b/include/asm-i386/bitops.h
    index 0f5634b..f37b8a2 100644
    --- a/include/asm-i386/bitops.h
    +++ b/include/asm-i386/bitops.h
    @@ -164,7 +164,7 @@ static inline int test_and_set_bit(int nr, unsigned long *addr)
     	__asm__ __volatile__( LOCK_PREFIX
     		"btsl %2,%1\n\tsbbl %0,%0"
     		:"=r" (oldbit),"=m" (*addr)
    -		:"r" (nr) : "memory");
    +		:"r" (nr));
     	return oldbit;
     }
     
    @@ -208,7 +208,7 @@ static inline int test_and_clear_bit(int nr, unsigned long *addr)
     	__asm__ __volatile__( LOCK_PREFIX
     		"btrl %2,%1\n\tsbbl %0,%0"
     		:"=r" (oldbit),"=m" (*addr)
    -		:"r" (nr) : "memory");
    +		:"r" (nr));
     	return oldbit;
     }
     
    @@ -254,7 +254,7 @@ static inline int __test_and_change_bit(int nr, unsigned long *addr)
     	__asm__ __volatile__(
     		"btcl %2,%1\n\tsbbl %0,%0"
     		:"=r" (oldbit),"=m" (*addr)
    -		:"r" (nr) : "memory");
    +		:"r" (nr));
     	return oldbit;
     }
     
    @@ -275,7 +275,7 @@ static inline int test_and_change_bit(int nr, unsigned long *addr)
     	__asm__ __volatile__( LOCK_PREFIX
     		"btcl %2,%1\n\tsbbl %0,%0"
     		:"=r" (oldbit),"=m" (*addr)
    -		:"r" (nr) : "memory");
    +		:"r" (nr));
     	return oldbit;
     }
     
