    Subject: [PATCH AUTOSEL 4.19 51/73] ARM: 8814/1: mm: improve/fix ARM v7_dma_inv_range() unaligned address handling
    From: Chris Cole <chris@sageembedded.com>

    [ Upstream commit a1208f6a822ac29933e772ef1f637c5d67838da9 ]

    This patch addresses possible memory corruption when
    v7_dma_inv_range(start_address, end_address) address parameters are not
    aligned to whole cache lines. This function issues "invalidate" cache
    management operations to all cache lines from start_address (inclusive)
    to end_address (exclusive). When start_address and/or end_address are
    not aligned, the start and/or end cache lines are first issued a "clean
    & invalidate" operation. The assumption is that this is done to ensure
    that any
    dirty data addresses outside the address range (but part of the first or
    last cache lines) are cleaned/flushed so that data is not lost, which
    could happen if just an invalidate is issued.
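
    As a rough sketch (not the kernel's actual code), the pre-patch
    behavior can be modeled in C as follows. Here dcache_clean_inv_line()
    and dcache_inv_line() are hypothetical stand-ins for the DCCIMVAC
    ("clean & invalidate") and DCIMVAC ("invalidate") cache operations,
    and the line size is assumed to be 64 bytes instead of being read
    from CTR as the real assembly does:

	#define LINE	64UL		/* assumed; the real code reads CTR */
	#define MASK	(LINE - 1)

	/* Hypothetical wrappers for the DCCIMVAC / DCIMVAC operations. */
	extern void dcache_clean_inv_line(unsigned long addr);
	extern void dcache_inv_line(unsigned long addr);

	/* Rough model of the pre-patch v7_dma_inv_range(). */
	static void dma_inv_range_old(unsigned long start, unsigned long end)
	{
		if (start & MASK)	/* keep dirty bytes before 'start' */
			dcache_clean_inv_line(start & ~MASK);
		start &= ~MASK;

		if (end & MASK)		/* keep dirty bytes after 'end' */
			dcache_clean_inv_line(end & ~MASK);
		end &= ~MASK;

		do {	/* note: an unaligned first line is hit a second time */
			dcache_inv_line(start);
			start += LINE;
		} while (start < end);
	}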

    The problem is that these first/last partial cache lines are issued
    "clean & invalidate" and then "invalidate". This second "invalidate" is
    not required and worse can cause "lost" writes to addresses outside the
    address range but part of the cache line. If another component writes to
    its part of the cache line between the "clean & invalidate" and
    "invalidate" operations, the write can get lost. This fix is to remove
    the extra "invalidate" operation when unaligned addressed are used.
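
    In the same hedged C model as above, the fix steps start past the
    first line once it has been cleaned & invalidated, and tests the loop
    bound before the first invalidate, matching the new addne/cmp/mcrlo
    sequence in the patch below:

	/* Rough model of the post-patch v7_dma_inv_range(). */
	static void dma_inv_range_new(unsigned long start, unsigned long end)
	{
		if (start & MASK) {
			start &= ~MASK;
			dcache_clean_inv_line(start);
			start += LINE;	/* the new addne: skip this line below */
		}

		if (end & MASK) {
			end &= ~MASK;
			dcache_clean_inv_line(end);
		}

		/* Test before the first invalidate, so a range whose lines
		 * were all handled above performs no extra invalidate. */
		while (start < end) {
			dcache_inv_line(start);
			start += LINE;
		}
	}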

    A kernel module is available that has a stress test to reproduce the
    issue and a unit test of the updated v7_dma_inv_range(). It can be
    downloaded from
    http://ftp.sageembedded.com/outgoing/linux/cache-test-20181107.tgz.

    v7_dma_inv_range() is called by dmac_[un]map_area(addr, len, direction)
    when the direction is DMA_FROM_DEVICE. One can (I believe) successfully
    argue that DMA from a device to main memory should use buffers aligned
    to cache line size, because the "clean & invalidate" might overwrite
    data that the device just wrote using DMA. But if a driver does use
    unaligned buffers, at least this fix will prevent memory corruption
    outside the buffer.
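
    As a hedged sketch of the aligned-buffer approach (my_dev_rx and
    my_dev_alloc_rx() are made-up names, not part of this patch): on ARM,
    kmalloc() returns blocks aligned to ARCH_KMALLOC_MINALIGN, which is
    ARCH_DMA_MINALIGN there, so giving a DMA_FROM_DEVICE transfer its own
    allocation rounded up to whole cache lines keeps unrelated data off
    the first and last lines:

	#include <linux/kernel.h>
	#include <linux/errno.h>
	#include <linux/slab.h>
	#include <linux/cache.h>

	struct my_dev_rx {		/* hypothetical driver state */
		void	*buf;
		size_t	len;
	};

	static int my_dev_alloc_rx(struct my_dev_rx *rx, size_t len)
	{
		/*
		 * Round up to whole cache lines so the buffer tail does not
		 * share a line with another object; the start is already
		 * line-aligned because ARCH_KMALLOC_MINALIGN ==
		 * ARCH_DMA_MINALIGN on ARM.
		 */
		rx->len = ALIGN(len, ARCH_DMA_MINALIGN);
		rx->buf = kmalloc(rx->len, GFP_KERNEL);
		return rx->buf ? 0 : -ENOMEM;
	}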

    Signed-off-by: Chris Cole <chris@sageembedded.com>
    Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
    Signed-off-by: Sasha Levin <sashal@kernel.org>
    ---
    arch/arm/mm/cache-v7.S | 8 +++++---
    1 file changed, 5 insertions(+), 3 deletions(-)

    diff --git a/arch/arm/mm/cache-v7.S b/arch/arm/mm/cache-v7.S
    index 215df435bfb9..2149b47a0c5a 100644
    --- a/arch/arm/mm/cache-v7.S
    +++ b/arch/arm/mm/cache-v7.S
    @@ -360,14 +360,16 @@ v7_dma_inv_range:
     	ALT_UP(W(nop))
     #endif
     	mcrne	p15, 0, r0, c7, c14, 1		@ clean & invalidate D / U line
    +	addne	r0, r0, r2
     
     	tst	r1, r3
     	bic	r1, r1, r3
     	mcrne	p15, 0, r1, c7, c14, 1		@ clean & invalidate D / U line
    -1:
    -	mcr	p15, 0, r0, c7, c6, 1		@ invalidate D / U line
    -	add	r0, r0, r2
     	cmp	r0, r1
    +1:
    +	mcrlo	p15, 0, r0, c7, c6, 1		@ invalidate D / U line
    +	addlo	r0, r0, r2
    +	cmplo	r0, r1
     	blo	1b
     	dsb	st
     	ret	lr
    --
    2.19.1