Subject: Re: [PATCH] x86, pmem: fix broken __copy_user_nocache cache-bypass assumptions
On Fri, Apr 7, 2017 at 10:41 AM, Kani, Toshimitsu <toshi.kani@hpe.com> wrote:
> On Thu, 2017-04-06 at 13:59 -0700, Dan Williams wrote:
>> Before we rework the "pmem api" to stop abusing __copy_user_nocache()
>> for memcpy_to_pmem() we need to fix cases where we may strand dirty
>> data in the cpu cache. The problem occurs when copy_from_iter_pmem()
>> is used for arbitrary data transfers from userspace. There is no
>> guarantee that these transfers, performed by dax_iomap_actor(), will
>> have aligned destinations or aligned transfer lengths. Backstop the
>> usage of __copy_user_nocache() with explicit cache management in these
>> unaligned cases.
>>
>> Yes, copy_from_iter_pmem() is now too big for an inline, but
>> addressing that is saved for a later patch that moves the entirety of
>> the "pmem api" into the pmem driver directly.
>
> The change looks good to me. Should we also avoid cache flushing in
> the case where size is 4 bytes and the destination is 4-byte aligned?

Yes, since you fixed the 4B aligned case we should skip cache flushing
in that case. I'll send a v2.
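
To make the alignment cases concrete, here is a minimal sketch of the
backstop being discussed, with the 4-byte-aligned exception planned for
v2 included. IS_ALIGNED() and clflush_cache_range() are existing kernel
helpers, but the function name flush_unaligned_edges() and the exact
checks are illustrative assumptions, not the patch's actual code:

	#include <linux/kernel.h>	/* IS_ALIGNED() */
	#include <asm/cacheflush.h>	/* clflush_cache_range() */

	/*
	 * Hypothetical helper: write back any cache lines that
	 * __copy_user_nocache() may have dirtied via cached stores
	 * when the destination or length is not naturally aligned.
	 */
	static void flush_unaligned_edges(void *dst, size_t size)
	{
		unsigned long dest = (unsigned long) dst;

		if (size < 8) {
			/*
			 * Short copies fall back to cached stores,
			 * except a 4-byte copy to a 4-byte-aligned
			 * destination, which uses a non-temporal store
			 * after the 4B-aligned fix mentioned above.
			 */
			if (!IS_ALIGNED(dest, 4) || size != 4)
				clflush_cache_range(dst, size);
			return;
		}

		/* cached stores reach the first 8-byte boundary... */
		if (!IS_ALIGNED(dest, 8))
			clflush_cache_range(dst, 1);

		/* ...and cover any sub-8-byte tail past the last one */
		if (!IS_ALIGNED(dest + size, 8))
			clflush_cache_range(dst + size - 1, 1);
	}

The idea is that copy_from_iter_pmem() would call such a helper after
__copy_user_nocache() returns, so the aligned bulk of the transfer still
bypasses the cache and only the misaligned edges pay for an explicit
flush.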
