Date:	Tue, 10 Dec 2013 09:25:39 -0500
From:	Rik van Riel <>
Subject:	Re: [PATCH 11/18] mm: fix TLB flush race between migration, and change_protection_range
On 12/09/2013 02:09 AM, Mel Gorman wrote:
After reading the locking thread that Paul McKenney started, I wonder if I got the barriers wrong in these functions...
> +#if defined(CONFIG_NUMA_BALANCING) || defined(CONFIG_COMPACTION)
> +/*
> + * Memory barriers to keep this state in sync are graciously provided by
> + * the page table locks, outside of which no page table modifications happen.
> + * The barriers below prevent the compiler from re-ordering the instructions
> + * around the memory barriers that are already present in the code.
> + */
> +static inline bool tlb_flush_pending(struct mm_struct *mm)
> +{
> +	barrier();
Should this be smp_mb__after_unlock_lock()?
> +	return mm->tlb_flush_pending;
> +}
> +static inline void set_tlb_flush_pending(struct mm_struct *mm)
> +{
> +	mm->tlb_flush_pending = true;
> +	barrier();
> +}
> +/* Clearing is done after a TLB flush, which also provides a barrier. */
> +static inline void clear_tlb_flush_pending(struct mm_struct *mm)
> +{
> +	barrier();
> +	mm->tlb_flush_pending = false;
> +}
And these smp_mb__before_spinlock()?
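Concretely, the substitution I have in mind would look something like the
sketch below. It is untested and the pairing may well be wrong; it assumes
the callers take or hold the page table lock around these helpers as in the
patch, and it leaves clear_tlb_flush_pending() alone since the TLB flush
itself already acts as a barrier there:

static inline bool tlb_flush_pending(struct mm_struct *mm)
{
	/*
	 * Callers hold the page table lock; smp_mb__after_unlock_lock()
	 * would promote the UNLOCK on the side that set the flag plus
	 * our LOCK into a full memory barrier.
	 */
	smp_mb__after_unlock_lock();
	return mm->tlb_flush_pending;
}

static inline void set_tlb_flush_pending(struct mm_struct *mm)
{
	mm->tlb_flush_pending = true;
	/*
	 * The caller takes the page table lock after this; order the
	 * store to tlb_flush_pending before that LOCK operation.
	 */
	smp_mb__before_spinlock();
}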
Paul? Peter?
--
All rights reversed