    Subject: Re: [PATCH 4.4 26/28] x86, pmem: fix broken __copy_user_nocache cache-bypass assumptions
    From: Ben Hutchings <ben.hutchings@codethink.co.uk>

    On Tue, 2017-04-25 at 16:08 +0100, Greg Kroah-Hartman wrote:
    > 4.4-stable review patch. If anyone has any objections, please let me know.
    >
    > ------------------
    >
    > From: Dan Williams <dan.j.williams@intel.com>
    >
    > commit 11e63f6d920d6f2dfd3cd421e939a4aec9a58dcd upstream.
    [...]
    > +	if (iter_is_iovec(i)) {
    > +		unsigned long flushed, dest = (unsigned long) addr;
    > +
    > +		if (bytes < 8) {
    > +			if (!IS_ALIGNED(dest, 4) || (bytes != 4))
    > +				__arch_wb_cache_pmem(addr, 1);
    [...]

    What if the write crosses a cache line boundary? I think you need the
    following fix-up (untested; I don't have this kind of hardware).
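
    To make that concrete, here is a minimal user-space sketch (not kernel
    code; the 64-byte line size and the lines_touched() helper are my own
    illustration, not anything from the patch) showing that an unaligned
    write of 2-7 bytes can span two cache lines, while a flush of length 1
    reaches only the first:

    #include <stdio.h>
    #include <stdint.h>

    #define CACHE_LINE_SIZE 64	/* assumed; the kernel uses boot_cpu_data.x86_clflush_size */

    /* Count the cache lines touched by an access of 'bytes' at 'dest'. */
    static unsigned int lines_touched(uintptr_t dest, size_t bytes)
    {
    	uintptr_t first = dest & ~(uintptr_t)(CACHE_LINE_SIZE - 1);
    	uintptr_t last = (dest + bytes - 1) & ~(uintptr_t)(CACHE_LINE_SIZE - 1);

    	return (last - first) / CACHE_LINE_SIZE + 1;
    }

    int main(void)
    {
    	/* A 4-byte write starting at offset 62 straddles the 64-byte boundary. */
    	printf("write of 4 bytes at 62 touches %u line(s)\n",
    	       lines_touched(62, 4));
    	/* Flushing length 1 at the same address covers only the first line. */
    	printf("flush of 1 byte at 62 touches %u line(s)\n",
    	       lines_touched(62, 1));
    	return 0;
    }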

    Ben.

    ---
    From: Ben Hutchings <ben.hutchings@codethink.co.uk>
    Subject: x86, pmem: Fix cache flushing for iovec write < 8 bytes

    Commit 11e63f6d920d added cache flushing for unaligned writes from an
    iovec, covering the first and last cache line of a >= 8 byte write and
    the first cache line of a < 8 byte write. But an unaligned write of
    2-7 bytes can still cover two cache lines, so make sure we flush both
    in that case.

    Fixes: 11e63f6d920d ("x86, pmem: fix broken __copy_user_nocache ...")
    Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
    ---
    arch/x86/include/asm/pmem.h | 2 +-
    1 file changed, 1 insertion(+), 1 deletion(-)

    diff --git a/arch/x86/include/asm/pmem.h b/arch/x86/include/asm/pmem.h
    index d5a22bac9988..0ff8fe71b255 100644
    --- a/arch/x86/include/asm/pmem.h
    +++ b/arch/x86/include/asm/pmem.h
    @@ -98,7 +98,7 @@ static inline size_t arch_copy_from_iter_pmem(void *addr, size_t bytes,
     
     		if (bytes < 8) {
     			if (!IS_ALIGNED(dest, 4) || (bytes != 4))
    -				arch_wb_cache_pmem(addr, 1);
    +				arch_wb_cache_pmem(addr, bytes);
     		} else {
     			if (!IS_ALIGNED(dest, 8)) {
     				dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);
    --
    Ben Hutchings
    Software Developer, Codethink Ltd.
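
    For completeness: the one-argument change is enough because
    arch_wb_cache_pmem() rounds its start address down to a cache line
    boundary and then writes back one line at a time until it passes
    addr + size, so passing the real length reaches every line the write
    touched. A simplified user-space analogue (a sketch only; printf
    stands in for the clwb instruction, and the 64-byte line size is
    assumed):

    #include <stdio.h>
    #include <stdint.h>

    #define CLFLUSH_SIZE 64	/* assumed; really boot_cpu_data.x86_clflush_size */

    /* Simplified analogue of the per-line write-back loop. */
    static void wb_cache_range(uintptr_t addr, size_t size)
    {
    	uintptr_t mask = CLFLUSH_SIZE - 1;
    	uintptr_t end = addr + size;
    	uintptr_t p;

    	/* Align down to the containing line, then flush until past the end. */
    	for (p = addr & ~mask; p < end; p += CLFLUSH_SIZE)
    		printf("clwb line at %#lx\n", (unsigned long)p);
    }

    int main(void)
    {
    	wb_cache_range(62, 1);	/* old code: flushes line 0x0 only */
    	wb_cache_range(62, 4);	/* fixed code: flushes lines 0x0 and 0x40 */
    	return 0;
    }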

