Subject: Re: [PATCH 2/3] asm-generic, x86: wrap atomic operations
On Fri, Mar 24, 2017 at 7:52 AM, Ingo Molnar <mingo@kernel.org> wrote:
>
> * Dmitry Vyukov <dvyukov@google.com> wrote:
>
>> KASAN uses compiler instrumentation to intercept all memory accesses.
>> But it does not see memory accesses done in assembly code.
>> One notable user of assembly code is atomic operations. Frequently,
>> for example, an atomic reference decrement is the last access to an
>> object and a good candidate for a racy use-after-free.
>>
>> Atomic operations are defined in arch files, but KASAN instrumentation
>> is required for several archs that support KASAN. Later we will need
>> similar hooks for KMSAN (uninit use detector) and KTSAN (data race
>> detector).
>>
>> This change introduces wrappers around atomic operations that can be
>> used to add KASAN/KMSAN/KTSAN instrumentation across several archs.
>> This patch uses the wrappers only for x86 arch. Arm64 will be switched
>> later.
>>
>> Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Will Deacon <will.deacon@arm.com>,
>> Cc: Andrew Morton <akpm@linux-foundation.org>,
>> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>,
>> Cc: Ingo Molnar <mingo@redhat.com>,
>> Cc: kasan-dev@googlegroups.com
>> Cc: linux-mm@kvack.org
>> Cc: linux-kernel@vger.kernel.org
>> Cc: x86@kernel.org
>> ---
>> arch/x86/include/asm/atomic.h | 100 +++++++-------
>> arch/x86/include/asm/atomic64_32.h | 86 ++++++------
>> arch/x86/include/asm/atomic64_64.h | 90 ++++++-------
>> arch/x86/include/asm/cmpxchg.h | 12 +-
>> arch/x86/include/asm/cmpxchg_32.h | 8 +-
>> arch/x86/include/asm/cmpxchg_64.h | 4 +-
>> include/asm-generic/atomic-instrumented.h | 210 ++++++++++++++++++++++++++++++
>> 7 files changed, 367 insertions(+), 143 deletions(-)
>
> Ugh, that's disgusting really...
>
>>
>> diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
>> index 14635c5ea025..95dd167eb3af 100644
>> --- a/arch/x86/include/asm/atomic.h
>> +++ b/arch/x86/include/asm/atomic.h
>> @@ -16,36 +16,46 @@
>> #define ATOMIC_INIT(i) { (i) }
>>
>> /**
>> - * atomic_read - read atomic variable
>> + * arch_atomic_read - read atomic variable
>> * @v: pointer of type atomic_t
>> *
>> * Atomically reads the value of @v.
>> */
>> -static __always_inline int atomic_read(const atomic_t *v)
>> +static __always_inline int arch_atomic_read(const atomic_t *v)
>> {
>> - return READ_ONCE((v)->counter);
>> + /*
>> + * We use READ_ONCE_NOCHECK() because atomic_read() contains KASAN
>> + * instrumentation. Double instrumentation is unnecessary.
>> + */
>> + return READ_ONCE_NOCHECK((v)->counter);
>> }

Hello Ingo,

> Firstly, the patch is way too large, please split off the new documentation
> parts of the patch to reduce the size and to make it easier to read!
>
> Secondly, the next patch should do the rename to the arch_atomic_*() pattern -
> and nothing else:

Next after what? Please provide the full list of patches as you see them.
How do we avoid build breakage if we do only the rename in a separate patch?
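For what it's worth, I am assuming a rename-only patch would also need
temporary aliases so that generic code keeps building until the instrumented
wrappers land, something like this (just a sketch of what I mean, not code
from this series):

/* sketch: keep existing callers building after a rename-only step */
static __always_inline int arch_atomic_read(const atomic_t *v)
{
	return READ_ONCE((v)->counter);
}

/* temporary aliases, dropped once atomic-instrumented.h takes over */
#define atomic_read(v)		arch_atomic_read(v)
#define atomic_set(v, i)	arch_atomic_set((v), (i))

Otherwise every atomic_*() caller breaks between the rename patch and the
patch that adds the wrappers.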



>> /**
>> - * atomic_set - set atomic variable
>> + * arch_atomic_set - set atomic variable
>> * @v: pointer of type atomic_t
>> * @i: required value
>> *
>> * Atomically sets the value of @v to @i.
>> */
>> -static __always_inline void atomic_set(atomic_t *v, int i)
>> +static __always_inline void arch_atomic_set(atomic_t *v, int i)
>
>
> Third, the prototype CPP complications:
>
>> +#define __INSTR_VOID1(op, sz) \
>> +static __always_inline void atomic##sz##_##op(atomic##sz##_t *v) \
>> +{ \
>> + arch_atomic##sz##_##op(v); \
>> +}
>> +
>> +#define INSTR_VOID1(op) \
>> +__INSTR_VOID1(op,); \
>> +__INSTR_VOID1(op, 64)
>> +
>> +INSTR_VOID1(inc);
>> +INSTR_VOID1(dec);
>> +
>> +#undef __INSTR_VOID1
>> +#undef INSTR_VOID1
>> +
>> +#define __INSTR_VOID2(op, sz, type) \
>> +static __always_inline void atomic##sz##_##op(type i, atomic##sz##_t *v)\
>> +{ \
>> + arch_atomic##sz##_##op(i, v); \
>> +}
>> +
>> +#define INSTR_VOID2(op) \
>> +__INSTR_VOID2(op, , int); \
>> +__INSTR_VOID2(op, 64, long long)
>> +
>> +INSTR_VOID2(add);
>> +INSTR_VOID2(sub);
>> +INSTR_VOID2(and);
>> +INSTR_VOID2(or);
>> +INSTR_VOID2(xor);
>> +
>> +#undef __INSTR_VOID2
>> +#undef INSTR_VOID2
>> +
>> +#define __INSTR_RET1(op, sz, type, rtype) \
>> +static __always_inline rtype atomic##sz##_##op(atomic##sz##_t *v) \
>> +{ \
>> + return arch_atomic##sz##_##op(v); \
>> +}
>> +
>> +#define INSTR_RET1(op) \
>> +__INSTR_RET1(op, , int, int); \
>> +__INSTR_RET1(op, 64, long long, long long)
>> +
>> +INSTR_RET1(inc_return);
>> +INSTR_RET1(dec_return);
>> +__INSTR_RET1(inc_not_zero, 64, long long, long long);
>> +__INSTR_RET1(dec_if_positive, 64, long long, long long);
>> +
>> +#define INSTR_RET_BOOL1(op) \
>> +__INSTR_RET1(op, , int, bool); \
>> +__INSTR_RET1(op, 64, long long, bool)
>> +
>> +INSTR_RET_BOOL1(dec_and_test);
>> +INSTR_RET_BOOL1(inc_and_test);
>> +
>> +#undef __INSTR_RET1
>> +#undef INSTR_RET1
>> +#undef INSTR_RET_BOOL1
>> +
>> +#define __INSTR_RET2(op, sz, type, rtype) \
>> +static __always_inline rtype atomic##sz##_##op(type i, atomic##sz##_t *v) \
>> +{ \
>> + return arch_atomic##sz##_##op(i, v); \
>> +}
>> +
>> +#define INSTR_RET2(op) \
>> +__INSTR_RET2(op, , int, int); \
>> +__INSTR_RET2(op, 64, long long, long long)
>> +
>> +INSTR_RET2(add_return);
>> +INSTR_RET2(sub_return);
>> +INSTR_RET2(fetch_add);
>> +INSTR_RET2(fetch_sub);
>> +INSTR_RET2(fetch_and);
>> +INSTR_RET2(fetch_or);
>> +INSTR_RET2(fetch_xor);
>> +
>> +#define INSTR_RET_BOOL2(op) \
>> +__INSTR_RET2(op, , int, bool); \
>> +__INSTR_RET2(op, 64, long long, bool)
>> +
>> +INSTR_RET_BOOL2(sub_and_test);
>> +INSTR_RET_BOOL2(add_negative);
>> +
>> +#undef __INSTR_RET2
>> +#undef INSTR_RET2
>> +#undef INSTR_RET_BOOL2
>
> These macros are just utterly disgusting and turn perfectly readable code
> into an unreadable, unmaintainable mess.
>
> You need to find some better, cleaner solution please, or convince me that no such
> solution is possible. NAK for the time being.

Well, I can just write out all the functions as is. Does that conform better
to kernel style? I've just looked at the x86 atomic.h and it uses macros for
a similar purpose (ATOMIC_OP/ATOMIC_FETCH_OP), so I thought that must be
idiomatic kernel style...
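For example, written out explicitly, the INSTR_VOID1(inc) above is nothing
more than:

/* what INSTR_VOID1(inc) expands to: one wrapper per size variant */
static __always_inline void atomic_inc(atomic_t *v)
{
	arch_atomic_inc(v);
}

static __always_inline void atomic64_inc(atomic64_t *v)
{
	arch_atomic64_inc(v);
}

and similarly for every other op. That's more lines, but each function is
trivial, so I can spell them all out if that's preferred.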
