From: Andy Lutomirski <>
Date: Wed, 26 Jul 2017 06:52:06 -0700
Subject: Re: [PATCH v6] x86/mm: Improve TLB flush documentation
On Tue, Jul 25, 2017 at 7:44 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Tue, Jul 25, 2017 at 07:10:44AM -0700, Andy Lutomirski wrote:
>> Improve comments as requested by PeterZ and also add some
>> documentation at the top of the file.
>>
>> This adds and removes some smp_mb__after_atomic() calls to make the
>> code correct even in the absence of x86's extra-strong atomics.
>
> The main point being that this better documents on which specific
> ordering we rely.
Indeed.
>> 	/*
>> +	 * Start remote flushes and then read tlb_gen.  As
>> +	 * above, the barrier synchronizes with
>> +	 * inc_mm_tlb_gen() like this:
>> +	 *
>> +	 * switch_mm_irqs_off():          flush request:
>> +	 *   cpumask_set_cpu(...);          inc_mm_tlb_gen();
>> +	 *   MB                             MB
>> +	 *   atomic64_read(.tlb_gen);       flush_tlb_others(mm_cpumask());
>> 	 */
>> 	cpumask_set_cpu(cpu, mm_cpumask(next));
>> +	smp_mb__after_atomic();
>> 	next_tlb_gen = atomic64_read(&next->context.tlb_gen);
>>
>> 	choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush);
>
> Arguably one could make a helper function of those few lines, not sure
> it makes sense, but this duplication seems wasteful.
>
> So we either see the increment or the CPU set, but can not have neither.
>
> Should not arch_tlbbatch_add_mm() also have this same comment? It too
> seems to increment and then read the mask.
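To make that diagram concrete, here's a minimal userspace sketch of the
same store-then-load pattern, using C11 atomics rather than the kernel
primitives (cpu_in_mask and tlb_gen below are illustrative stand-ins,
not the kernel's actual data structures):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool cpu_in_mask;   /* stands in for our bit in mm_cpumask() */
static atomic_long tlb_gen;       /* stands in for mm->context.tlb_gen */

/* switch_mm_irqs_off() side: set the mask bit, then read the generation. */
static long switcher_side(void)
{
	atomic_store_explicit(&cpu_in_mask, true, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);   /* the left-hand "MB" */
	return atomic_load_explicit(&tlb_gen, memory_order_relaxed);
}

/* Flush-request side: bump the generation, then read the mask bit. */
static bool flusher_side(void)
{
	atomic_fetch_add_explicit(&tlb_gen, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);   /* the right-hand "MB" */
	return atomic_load_explicit(&cpu_in_mask, memory_order_relaxed);
}

static long seen_gen;
static bool seen_mask;

static void *run_switcher(void *arg) { (void)arg; seen_gen = switcher_side(); return NULL; }
static void *run_flusher(void *arg)  { (void)arg; seen_mask = flusher_side(); return NULL; }

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, run_switcher, NULL);
	pthread_create(&b, NULL, run_flusher, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* The fences forbid seen_gen == 0 && seen_mask == false. */
	printf("gen seen: %ld, mask bit seen: %d\n", seen_gen, (int)seen_mask);
	return 0;
}

The outcome the fences forbid is both sides reading the stale value,
which is exactly the "we either see the increment or the CPU set, but
can not have neither" property.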
Hmm. There's already this comment in inc_mm_tlb_gen():
/*
 * Bump the generation count.  This also serves as a full barrier
 * that synchronizes with switch_mm(): callers are required to order
 * their read of mm_cpumask after their writes to the paging
 * structures.
 */
Is that not adequate?
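For reference, inc_mm_tlb_gen() is (roughly) just a thin wrapper; the
full-barrier property that comment relies on comes from
atomic64_inc_return() being a value-returning atomic, which the kernel's
memory model requires to be fully ordered:

static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
{
	/*
	 * atomic64_inc_return() both bumps the generation and, because
	 * it returns a value, implies a full memory barrier, so the
	 * flush side needs no explicit smp_mb__after_atomic().
	 */
	return atomic64_inc_return(&mm->context.tlb_gen);
}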
FWIW, I have follow-up patches in the works to further deduplicate a bunch of this code. I wanted to get the main bits all landed first, though.