    From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Subject: [PATCH 4.3 017/157] x86/mm: Improve switch_mm() barrier comments
    Date: 27 Jan 2016
    4.3-stable review patch.  If anyone has any objections, please let me know.

    ------------------

    From: Andy Lutomirski <luto@kernel.org>

    commit 4eaffdd5a5fe6ff9f95e1ab4de1ac904d5e0fa8b upstream.

    My previous comments were still a bit confusing and there was a
    typo. Fix it up.

    Reported-by: Peter Zijlstra <peterz@infradead.org>
    Signed-off-by: Andy Lutomirski <luto@kernel.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Brian Gerst <brgerst@gmail.com>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: Denys Vlasenko <dvlasenk@redhat.com>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Fixes: 71b3c126e611 ("x86/mm: Add barriers and document switch_mm()-vs-flush synchronization")
    Link: http://lkml.kernel.org/r/0a0b43cdcdd241c5faaaecfbcc91a155ddedc9a1.1452631609.git.luto@kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
    arch/x86/include/asm/mmu_context.h | 15 ++++++++-------
    1 file changed, 8 insertions(+), 7 deletions(-)

    --- a/arch/x86/include/asm/mmu_context.h
    +++ b/arch/x86/include/asm/mmu_context.h
    @@ -132,14 +132,16 @@ static inline void switch_mm(struct mm_s
     		 * be sent, and CPU 0's TLB will contain a stale entry.)
     		 *
     		 * The bad outcome can occur if either CPU's load is
    -		 * reordered before that CPU's store, so both CPUs much
    +		 * reordered before that CPU's store, so both CPUs must
     		 * execute full barriers to prevent this from happening.
     		 *
     		 * Thus, switch_mm needs a full barrier between the
     		 * store to mm_cpumask and any operation that could load
    -		 * from next->pgd. This barrier synchronizes with
    -		 * remote TLB flushers. Fortunately, load_cr3 is
    -		 * serializing and thus acts as a full barrier.
    +		 * from next->pgd. TLB fills are special and can happen
    +		 * due to instruction fetches or for no reason at all,
    +		 * and neither LOCK nor MFENCE orders them.
    +		 * Fortunately, load_cr3() is serializing and gives the
    +		 * ordering guarantee we need.
     		 *
     		 */
     		load_cr3(next->pgd);
    @@ -188,9 +190,8 @@ static inline void switch_mm(struct mm_s
     			 * tlb flush IPI delivery. We must reload CR3
     			 * to make sure to use no freed page tables.
     			 *
    -			 * As above, this is a barrier that forces
    -			 * TLB repopulation to be ordered after the
    -			 * store to mm_cpumask.
    +			 * As above, load_cr3() is serializing and orders TLB
    +			 * fills with respect to the mm_cpumask write.
     			 */
     			load_cr3(next->pgd);
     			trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
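
    For readers tracing the comment's reasoning: the hazard it describes is the
    classic store-buffering pattern. The switching CPU stores to mm_cpumask and
    then loads from the page tables (a TLB fill); the flushing CPU updates the
    page tables and then loads mm_cpumask to decide whether an IPI is needed.
    The stand-alone user-space sketch below (not kernel code; the names
    mm_cpumask_bit, pgd_version, switcher and flusher are illustrative stand-ins)
    models the two sides with C11 atomics:

    /*
     * Stand-alone user-space sketch (not kernel code) of the ordering
     * hazard described above.  mm_cpumask_bit and pgd_version are
     * illustrative stand-ins for mm_cpumask(next) and next->pgd.
     * Build with: cc -std=c11 -pthread sb_sketch.c
     */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int mm_cpumask_bit;	/* "this CPU uses the mm" flag       */
    static atomic_int pgd_version;	/* stands in for a page-table update */
    static int r_switcher, r_flusher;	/* what each side observed           */

    static void *switcher(void *arg)	/* models the switch_mm() side */
    {
    	(void)arg;
    	/* Store: mark this CPU as using the mm. */
    	atomic_store_explicit(&mm_cpumask_bit, 1, memory_order_relaxed);
    	/*
    	 * In the kernel this full barrier is provided by load_cr3(),
    	 * which is serializing.  Remove the fence to allow the race.
    	 */
    	atomic_thread_fence(memory_order_seq_cst);
    	/* Load: a TLB fill reading the (possibly updated) page tables. */
    	r_switcher = atomic_load_explicit(&pgd_version, memory_order_relaxed);
    	return NULL;
    }

    static void *flusher(void *arg)	/* models the remote TLB flush side */
    {
    	(void)arg;
    	/* Store: publish the page-table change. */
    	atomic_store_explicit(&pgd_version, 1, memory_order_relaxed);
    	/* The flush side needs its own full barrier as well. */
    	atomic_thread_fence(memory_order_seq_cst);
    	/* Load: check the cpumask to decide whether to send an IPI. */
    	r_flusher = atomic_load_explicit(&mm_cpumask_bit, memory_order_relaxed);
    	return NULL;
    }

    int main(void)
    {
    	pthread_t a, b;

    	pthread_create(&a, NULL, switcher, NULL);
    	pthread_create(&b, NULL, flusher, NULL);
    	pthread_join(a, NULL);
    	pthread_join(b, NULL);

    	/*
    	 * With both fences, r_switcher == 0 && r_flusher == 0 is
    	 * forbidden: at least one side must observe the other's store.
    	 * Without them, that outcome (no IPI sent, stale TLB entry
    	 * still in use) is allowed.
    	 */
    	printf("switcher saw pgd_version=%d, flusher saw cpumask bit=%d\n",
    	       r_switcher, r_flusher);
    	return 0;
    }

    A single run will rarely expose the reordering; the point is only that
    without both fences the memory model permits both loads to return 0,
    i.e. the flusher skips the IPI while the switching CPU's TLB fill used
    the old page tables. As the updated comment notes, the kernel side
    cannot rely on an ordinary LOCK or MFENCE barrier, since those do not
    order TLB fills; that is why it leans on load_cr3() being serializing.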
