Subject: Re: [PATCH] x86/mm/pat: Fix missing preemption disable for __native_flush_tlb()
On 11/10/18 4:31 PM, Dan Williams wrote:
>> If it indeed can run late in boot or after boot, then it sure looks
>> buggy. Either the __flush_tlb_all() should be removed or it should
>> be replaced with flush_tlb_kernel_range(). It’s unclear to me why a
>> flush is needed at all, but if it’s needed, surely all CPUs need
>> flushing.
> Yeah, I don't think __flush_tlb_all() is needed at
> kernel_physical_mapping_init() time, and at
> kernel_physical_mapping_remove() time we do a full flush_tlb_all().

It doesn't look strictly necessary to me. I _think_ we're only ever
populating previously non-present entries, and those never need TLB
flushes: the TLB never caches non-present entries, so there's nothing
stale to invalidate. I didn't look too deeply, though, so I'd
appreciate anyone else double-checking me on this.
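
For reference, the populate side has roughly this shape (a simplified
sketch of the pattern I mean, not the exact upstream code; 'prot' and
'pfn' are placeholders):

	pte = pte_offset_kernel(pmd, vaddr);
	if (!pte_none(*pte))
		continue;	/* already populated, leave it alone */
	/*
	 * non-present -> present transition: the TLB can't hold a
	 * stale entry for this address, so no flush is required.
	 */
	set_pte(pte, pfn_pte(pfn, prot));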

The __flush_tlb_all() actually appears to predate git, and it was
originally intended for early boot only. It probably lasted this long
because it looks really important. :)

It was even right next to the code that reads the MMU features out of
CR4, which runs *really* early in boot:

> + asm volatile("movq %%cr4,%0" : "=r" (mmu_cr4_features));
> + __flush_tlb_all();

I also totally agree with Andy that if it were needed on the local CPU,
this code would be buggy because it doesn't initiate any *remote* TLB
flushes.
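
For anyone following along, the distinction is roughly this (a hedged
sketch along the lines of what arch/x86/mm/tlb.c does, not verbatim):

	static void do_flush_tlb_all(void *info)
	{
		__flush_tlb_all();	/* flushes *only* the calling CPU */
	}

	void flush_tlb_all(void)
	{
		/* IPI every online CPU so each one runs the local flush */
		on_each_cpu(do_flush_tlb_all, NULL, 1);
	}

So a bare __flush_tlb_all() in the mapping-init path would leave every
other CPU's TLB untouched.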

So, let's remove it, but also add some comments saying that this code
is only allowed to *populate* page table entries, never *change* them.
We could even add some warnings to enforce that.
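
Something like this in the populate path, for instance (hypothetical
and untested; 'new_pte' is a placeholder for whatever entry we're
about to install):

	/*
	 * We only ever populate previously non-present entries here,
	 * and we no longer flush the TLB. Scream if anybody tries to
	 * *change* a present entry, since that _would_ need a flush.
	 */
	WARN_ON_ONCE(pte_present(*pte) && !pte_same(*pte, new_pte));
	set_pte(pte, new_pte);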
