Subject: Re: [PATCH] x86, kasan: add KASAN checks to atomic operations
Hi,

On Wed, Mar 08, 2017 at 02:42:10PM +0100, Dmitry Vyukov wrote:
> I think if we scope compiler atomic builtins to KASAN/KTSAN/KMSAN (and
> consequently x86/arm64) initially, it becomes more realistic. For the
> tools we don't care about absolute efficiency and this gets rid of
> Will's points (2), (4) and (6) here https://lwn.net/Articles/691295/.
> Re (3) I think rmb/wmb can be reasonably replaced with
> atomic_thread_fence(acquire/release). Re (5), the situation with
> correctness becomes better very quickly as more people use them in
> user-space. Since KASAN is not intended to be used in production (or
> at least such a build is expected to crash), we can afford to shake out
> any remaining correctness issues in such build. (1) I don't fully
> understand, what exactly is the problem with seq_cst?

I'll have to leave it to Will to have the final word on these; I'm
certainly not familiar enough with the C11 memory model to comment on
(1).

However, w.r.t. (3), I don't think we can substitute rmb() and wmb()
with atomic_thread_fence_acquire() and atomic_thread_fence_release()
respectively on arm64.

The former use barriers with full system scope, whereas the latter may
be limited to the inner shareable domain. While I'm not sure of the
precise intended semantics of wmb() and rmb(), I believe this
substitution would break some cases (e.g. communicating with a
non-coherent master).

Note that regardless, we'd have to special-case __iowmb() to use a full
system barrier.
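
To make that concrete, here's a contrived sketch (my own, not from any
patch; the function names are made up, and the noted instruction
expansions are my reading of arch/arm64/include/asm/barrier.h and of
typical compiler output for arm64):

	#include <linux/types.h>
	#include <asm/barrier.h>

	/* Write a descriptor for a (possibly non-coherent) device, then
	 * ring its doorbell. */
	void ring_doorbell_kernel_barrier(u32 *desc, volatile u32 *doorbell)
	{
		desc[0] = 0xf00;
		wmb();			/* dsb st: full system scope */
		*doorbell = 1;
	}

	void ring_doorbell_compiler_fence(u32 *desc, volatile u32 *doorbell)
	{
		desc[0] = 0xf00;
		/* dmb ish: only orders within the inner shareable domain */
		__atomic_thread_fence(__ATOMIC_RELEASE);
		*doorbell = 1;	/* device may see this before desc[0] */
	}

In real driver code the doorbell write would go through writel(), which
should already provide the necessary barrier; the point is just that a
plain release fence from the compiler doesn't give us the dsb that wmb()
does today.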

Also, w.r.t. (5), modulo the lack of atomic instrumentation, people use
KASAN today, with compilers that are known to have bugs in their atomics
(e.g. GCC bug 69875). Thus, we cannot rely on the compiler's
implementation of atomics without introducing a functional regression.

> I've sketched a patch that does it, and did some testing with/without
> KASAN on x86_64.
>
> In short, it adds include/linux/atomic_compiler.h which is included
> from include/linux/atomic.h when CONFIG_COMPILER_ATOMIC is defined;
> and <asm/atomic.h> is not included when CONFIG_COMPILER_ATOMIC is
> defined.
> For bitops it is similar, except that only parts of asm/bitops.h are
> selectively disabled when CONFIG_COMPILER_ATOMIC is defined, because it
> also defines other stuff.
> asm/barriers.h is left intact for now. We don't need it for KASAN. But
> for KTSAN we can do similar thing -- selectively disable some of the
> barriers in asm/barriers.h (e.g. leaving dma_rmb/wmb per arch).
>
> Such a change would allow us to support atomic ops for multiple arches
> for all of KASAN/KTSAN/KMSAN.
>
> Thoughts?

As in my other reply, I'd prefer that we wrapped the (arch-specific)
atomic implementations such that we can instrument them explicitly in a
core header. That means that the implementation and semantics of the
atomics don't change at all.

Note that we could initially do this just for x86 and arm64, e.g. by
having those explicitly include an <asm-generic/atomic-instrumented.h>
at the end of their <asm/atomic.h>.
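
Very roughly, I'd imagine something like the below (an untested sketch
to show the shape; the arch_atomic_*() names are placeholders for the
renamed arch implementations, and I'm assuming kasan_check_read() /
kasan_check_write() helpers from <linux/kasan-checks.h>):

	#ifndef _ASM_GENERIC_ATOMIC_INSTRUMENTED_H
	#define _ASM_GENERIC_ATOMIC_INSTRUMENTED_H

	#include <linux/kasan-checks.h>

	/* Check the memory the atomic operates on, then call the arch op. */
	static __always_inline int atomic_read(const atomic_t *v)
	{
		kasan_check_read(v, sizeof(*v));
		return arch_atomic_read(v);
	}

	static __always_inline void atomic_add(int i, atomic_t *v)
	{
		kasan_check_write(v, sizeof(*v));
		arch_atomic_add(i, v);
	}

	/* ... and so on for the rest of the atomic and bitop APIs ... */

	#endif /* _ASM_GENERIC_ATOMIC_INSTRUMENTED_H */

Since the wrappers are __always_inline and the checks compile to nothing
without KASAN, this shouldn't change the generated code for !KASAN
builds.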

For architectures which can use the compiler's atomics, we can allow
them to do so, skipping the redundant explicit instrumentation.

Other than being potentially slower (which, as established above, we
don't care too much about), is there a problem with that approach?

Thanks,
Mark.
