Subject: Re: [PATCH -v2 1/4] mm: Rework {set,clear,mm}_tlb_flush_pending()
On Wed, Aug 02, 2017 at 01:38:38PM +0200, Peter Zijlstra wrote:
> /*
> + * The only time this value is relevant is when there are indeed pages
> + * to flush. And we'll only flush pages after changing them, which
> + * requires the PTL.
> + *
> + * So the ordering here is:
> + *
> + * mm->tlb_flush_pending = true;
> + * spin_lock(&ptl);
> + * ...
> + * set_pte_at();
> + * spin_unlock(&ptl);
> + *
> + * spin_lock(&ptl)
> + * mm_tlb_flush_pending();
> + * ....

Crud, so while I was rebasing Nadav's patches I realized that this does
not in fact work for PPC and split PTL, because the PPC lwsync relies on
the address dependency to actually produce the ordering.

Also, since Nadav switched to atomic_inc/atomic_dec, I'll send a patch
to add smp_mb__after_atomic(), and

> + * spin_unlock(&ptl);
> + *
> + * flush_tlb_range();
> + * mm->tlb_flush_pending = false;
> + *
> + * So the =true store is constrained by the PTL unlock, and the =false
> + * store is constrained by the TLB invalidate.
> */
> }
> /* Clearing is done after a TLB flush, which also provides a barrier. */
> static inline void clear_tlb_flush_pending(struct mm_struct *mm)
> {
> + /* see set_tlb_flush_pending */

smp_mb__before_atomic() here. That also avoids the whole reliance on the
tlb_flush nonsense.

It will pile on redundant barriers on some platforms though :/

> mm->tlb_flush_pending = false;
> }
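
Purely as an illustration of the above, not the actual patch: a rough
sketch of where the two barriers would sit once mm->tlb_flush_pending is
an atomic_t counter as in Nadav's series (the field type and the exact
function names here are assumed):

static inline void set_tlb_flush_pending(struct mm_struct *mm)
{
	atomic_inc(&mm->tlb_flush_pending);
	/*
	 * Order the increment before the subsequent PTL-protected PTE
	 * update, so a reader that observes the new PTE also observes
	 * the pending flush, instead of relying on the PTL lock/unlock
	 * to provide that ordering.
	 */
	smp_mb__after_atomic();
}

static inline void clear_tlb_flush_pending(struct mm_struct *mm)
{
	/*
	 * Order the TLB invalidate before the decrement, instead of
	 * relying on flush_tlb_range() to imply a barrier.
	 */
	smp_mb__before_atomic();
	atomic_dec(&mm->tlb_flush_pending);
}

With that, the clearing side no longer depends on the flush providing a
barrier, at the cost of an smp_mb() on architectures whose atomics are
not already fully ordered.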
