Date:	Fri, 03 Jul 2009 11:20:20 +0200
From:	Eric Dumazet <>
Subject:	Re: [PATCHv5 2/2] memory barrier: adding smp_mb__after_lock
Ingo Molnar wrote:

> * Jiri Olsa <jolsa@redhat.com> wrote:
>
>> +++ b/arch/x86/include/asm/spinlock.h
>> @@ -302,4 +302,7 @@ static inline void __raw_write_unlock(raw_rwlock_t *rw)
>>  #define _raw_read_relax(lock)	cpu_relax()
>>  #define _raw_write_relax(lock)	cpu_relax()
>>
>> +/* The {read|write|spin}_lock() on x86 are full memory barriers. */
>> +#define smp_mb__after_lock() do { } while (0)
>
> Two small stylistic comments, please make this an inline function:
>
>	static inline void smp_mb__after_lock(void) { }
>	#define smp_mb__after_lock
>
> (untested)
>
>> +/* The lock does not imply full memory barrier. */
>> +#ifndef smp_mb__after_lock
>> +#define smp_mb__after_lock() smp_mb()
>> +#endif
>
> ditto.
>
> 	Ingo
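(For reference, the two hunks under the suggested inline-function style would look roughly like this. A sketch only, untested; note the macro needs a non-empty expansion so that call sites written as smp_mb__after_lock() still compile, and so the generic #ifndef guard sees it as defined:)

/* arch/x86/include/asm/spinlock.h -- sketch, not the posted patch */

/* The {read|write|spin}_lock() on x86 are full memory barriers. */
static inline void smp_mb__after_lock(void) { }
/* Expand to the function name, not to nothing, so calls still work. */
#define smp_mb__after_lock smp_mb__after_lock

/* include/linux/spinlock.h -- generic fallback, same style */

/* The lock does not imply a full memory barrier. */
#ifndef smp_mb__after_lock
static inline void smp_mb__after_lock(void)
{
	smp_mb();
}
#endif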
The #define form was following the existing implementations of the various smp_mb__* helpers:
# grep -4 smp_mb__before_clear_bit include/asm-generic/bitops.h
/*
 * clear_bit may not imply a memory barrier
 */
#ifndef smp_mb__before_clear_bit
#define smp_mb__before_clear_bit()	smp_mb()
#define smp_mb__after_clear_bit()	smp_mb()
#endif
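The point of the helper, in either style, is that the wakeup path can pay nothing on architectures where the lock is already a full barrier, while other architectures fall back to smp_mb(). A hypothetical caller in the style of the networking code this series targets (illustrative only; example_sock_wakeup and the exact placement are not from this thread):

#include <net/sock.h>

/*
 * Hypothetical wakeup-side sketch: the wait-queue check must be
 * ordered after taking sk_callback_lock, so it cannot miss a
 * sleeper that just did add_wait_queue() on the poll side.
 */
static void example_sock_wakeup(struct sock *sk)
{
	read_lock(&sk->sk_callback_lock);
	smp_mb__after_lock();	/* no-op on x86, smp_mb() elsewhere */
	if (sk->sk_sleep && waitqueue_active(sk->sk_sleep))
		wake_up_interruptible(sk->sk_sleep);
	read_unlock(&sk->sk_callback_lock);
}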