SubjectRe: [PATCH] x86/mm: Add barriers and document switch_mm()-vs-flush synchronization follow-up
Rafael Aquini <aquini@redhat.com> wrote:

> On Tue, Aug 02, 2016 at 03:27:06PM -0700, Nadav Amit wrote:
>> Rafael Aquini <aquini@redhat.com> wrote:
>>
>>> While backporting 71b3c126e611 ("x86/mm: Add barriers and document switch_mm()-vs-flush synchronization"),
>>> we stumbled across a possibly missing barrier at flush_tlb_page().
>>
>> I too noticed it and submitted a similar patch that never got a response [1].
>
> As far as I understood Andy's rationale for the original patch, you need
> a full memory barrier there in flush_tlb_page() to get that cache-eviction
> race sorted out.

I am completely ok with your fix (except for the missing barrier in
set_tlb_ubc_flush_pending()). However, I think mine should suffice: as
far as I saw, every invocation of flush_tlb_page() is preceded by an
atomic operation. I was afraid someone would ask me to measure the
patch's performance impact, so I looked for the fix with the least
impact.
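
Roughly, the pattern I have in mind looks like this (an illustrative
sketch only, not the exact code from my patch; the call site shown is
hypothetical):

	/* Call site, e.g. an unmap path: the PTE update is an atomic
	 * read-modify-write, which on x86 already acts as a full fence. */
	pte = ptep_get_and_clear(mm, addr, ptep);
	flush_tlb_page(vma, addr);

	/* In flush_tlb_page(): pair with the caller's atomic operation
	 * instead of issuing a second full barrier. */
	void flush_tlb_page(struct vm_area_struct *vma, unsigned long start)
	{
		/* Order the preceding page-table update before the loads
		 * that decide which CPUs to flush (e.g. mm_cpumask()). */
		smp_mb__after_atomic();
		...
	}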

See Intel SDM 8.2.2, "Memory Ordering in P6 and More Recent Processor
Families", for the reasoning behind smp_mb__after_atomic(). The ordering
provided by an atomic operation followed by smp_mb__after_atomic()
should be identical to that of smp_mb().
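
For reference, on x86 the relevant definitions (quoting from memory;
please double-check arch/x86/include/asm/barrier.h) boil down to:

	/* LOCK-prefixed RMW instructions already order all prior loads and
	 * stores on x86, so only compiler reordering must be prevented. */
	#define smp_mb__before_atomic()	barrier()
	#define smp_mb__after_atomic()	barrier()

which is why this variant does not add a second serializing instruction
on the flush path.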

Regards,
Nadav



