    Date: Thu, 3 Sep 2020
    From: Kees Cook <keescook@chromium.org>
    Subject: Re: [PATCH v2 02/28] x86/asm: Replace __force_order with memory clobber
    On Thu, Sep 03, 2020 at 01:30:27PM -0700, Sami Tolvanen wrote:
    > From: Arvind Sankar <nivedita@alum.mit.edu>
    >
    > The CRn accessor functions use __force_order as a dummy operand to
    > prevent the compiler from reordering CRn reads/writes with respect to
    > each other.
    >
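    For context, the accessors in question follow roughly this pattern (a
    sketch in the spirit of the special_insns.h code being patched, not a
    verbatim copy):

    extern unsigned long __force_order;

    static inline unsigned long native_read_cr0(void)
    {
            unsigned long val;
            /* dummy "=m" output ties the read to __force_order */
            asm volatile("mov %%cr0,%0\n\t" : "=r" (val), "=m" (__force_order));
            return val;
    }

    static inline void native_write_cr0(unsigned long val)
    {
            /* dummy "m" input only, so nothing additionally orders writes against writes */
            asm volatile("mov %0,%%cr0" : : "r" (val), "m" (__force_order));
    }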
    > The fact that the asm is volatile should be enough to prevent this:
    > volatile asm statements should be executed in program order. However GCC
    > 4.9.x and 5.x have a bug that might result in reordering. This was fixed
    > in 8.1, 7.3 and 6.5. Versions prior to these, including 5.x and 4.9.x,
    > may reorder volatile asm statements with respect to each other.
    >
    > There are some issues with __force_order as implemented:
    > - It is used only as an input operand for the write functions, and hence
    > doesn't do anything additional to prevent reordering writes.
    > - It allows memory accesses to be cached/reordered across write
    > functions, but CRn writes affect the semantics of memory accesses, so
    > this could be dangerous.
    > - __force_order is not actually defined in the kernel proper, but the
    > LLVM toolchain can in some cases require a definition: LLVM (as well
    > as GCC 4.9) requires it for PIE code, which is why the compressed
    > kernel has a definition, but also the clang integrated assembler may
    > consider the address of __force_order to be significant, resulting in
    > a reference that requires a definition.
    >
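    (The "definition" referred to above is nothing more than a bare global
    that lets the reference resolve at link time; the compressed kernel
    carries something along the lines of:

            unsigned long __force_order;

    purely to satisfy the toolchain.)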
    > Fix this by:
    > - Using a memory clobber for the write functions to additionally prevent
    > caching/reordering memory accesses across CRn writes.
    > - Using a dummy input operand with an arbitrary constant address for the
    > read functions, instead of a global variable. This will prevent reads
    > from being reordered across writes, while allowing memory loads to be
    > cached/reordered across CRn reads, which should be safe.
    >
    > Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
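    To make the fix concrete, the reworked accessors end up looking roughly
    like the sketch below (assuming a helper macro named __FORCE_ORDER and an
    arbitrary constant address such as 0x1000; the patch itself is
    authoritative):

    /*
     * Dummy input operand at an arbitrary constant address: a read carrying
     * this operand cannot be reordered across a write that clobbers "memory",
     * while ordinary loads may still be cached/reordered across CRn reads.
     */
    #define __FORCE_ORDER "m"(*(unsigned int *)0x1000UL)

    static inline unsigned long native_read_cr0(void)
    {
            unsigned long val;
            asm volatile("mov %%cr0,%0\n\t" : "=r" (val) : __FORCE_ORDER);
            return val;
    }

    static inline void native_write_cr0(unsigned long val)
    {
            /* "memory" clobber: memory accesses are not cached or reordered across the write */
            asm volatile("mov %0,%%cr0" : : "r" (val) : "memory");
    }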

    In the primary thread for this patch I sent a Reviewed tag, but for good
    measure, here it is again:

    Reviewed-by: Kees Cook <keescook@chromium.org>

    --
    Kees Cook
