Date: Tue, 19 Dec 2017 19:08:21 +0100 (CET)
From: Thomas Gleixner <>
Subject: Re: [patch V163 27/51] x86/mm/pti: Populate user PGD
On Tue, 19 Dec 2017, Ingo Molnar wrote:
> * Peter Zijlstra <peterz@infradead.org> wrote:
> 
> > On Mon, Dec 18, 2017 at 12:45:13PM -0800, Dave Hansen wrote:
> > > On 12/18/2017 12:41 PM, Peter Zijlstra wrote:
> > > >> I also don't think the user_shared area of the fixmap can get *that*
> > > >> big. Does anybody know offhand what the theoretical limits are there?
> > > > Problem there is the nr_cpus term I think, we currently have up to 8k
> > > > CPUs, but I can see that getting bigger in the future.
> > > It only matters if we go over 512GB, though. Is the per-cpu part of the
> > > fixmap ever more than 512GB/8k=64MB?
> > 
> > Unlikely, I think the LDT (@ 32 pages / 128K) and the DS (@ 2*4 pages /
> > 32K) are the largest entries in there.
> 
> Note that with the latest state of things the LDT is not in the fixmap anymore,
> it's mapped separately, via Andy's following patch:
> 
>   e86aaee3f2d9: ("x86/pti: Put the LDT in its own PGD if PTI is on")
> 
> We have the IDT, the per-CPU entry area and the Debug Store (on Intel CPUs) mapped
> in the fixmap area, in addition to the usual fixmap entries that are a handful of
> pages. (That's on 64-bit - on 32-bit we have a pretty large kmap area.)
> 
> The biggest contribution to the size of the fixmap area is struct cpu_entry_area
> (FIX_CPU_ENTRY_AREA_BOTTOM..FIX_CPU_ENTRY_AREA_TOP), which is ~180k, i.e. 44
> pages.
> 
> Our current NR_CPUS limit is 8,192 CPUs, but even with 65,536 CPUs the fixmap area
> would still only be ~12 GB total - so we are far from running out of space.
We don't run out of space, but the 0-day robot triggered a nasty issue.
The bottom of the fixmap area, which contains the early_ioremap (FIX_BTMAP) slots, is at:
vaddr_bt = FIXADDR_TOP - FIX_BTMAP_BEGIN * PAGE_SIZE
If that address is lower than:
vaddr_end = __START_KERNEL_map + KERNEL_IMAGE_SIZE;
then cleanup_highmap() will happily zero out the PMD entry for the PTE page of FIX_BTMAP. That entry was set up earlier in early_ioremap_init().
As a consequence, the first call to __early_set_fixmap(), which tries to install a PTE for early_ioremap(), will crash and burn.
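
To make the collision concrete, here is an illustrative sketch of the address arithmetic (not kernel code and not part of the patch below; the helper name is made up, the macros are the stock x86-64 layout macros from above):

	/*
	 * Illustrative sketch only. With the enlarged cpu_entry_area
	 * the fixmap grows downwards far enough that the FIX_BTMAP
	 * slots can drop below vaddr_end.
	 */
	static bool btmap_collides_with_highmap_cleanup(void)
	{
		unsigned long vaddr_bt  = FIXADDR_TOP - FIX_BTMAP_BEGIN * PAGE_SIZE;
		unsigned long vaddr_end = __START_KERNEL_map + KERNEL_IMAGE_SIZE;

		/*
		 * cleanup_highmap() walks the PMDs up to vaddr_end and
		 * zeroes every entry outside [_text, _brk_end]. If the
		 * FIX_BTMAP slots fall below vaddr_end, the PMD which
		 * early_ioremap_init() populated with bm_pte is zapped
		 * along with them.
		 */
		return vaddr_bt < vaddr_end;
	}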
Below is a nasty hack which fixes the problem. Ideally we get all of this cpu_entry_stuff out of the fixmap. I'll look into that later, but for now the patch 'fixes' the issue.
Thanks,
	tglx

8<-------------

--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -209,6 +209,8 @@ extern pte_t *kmap_pte;
 #define kmap_prot PAGE_KERNEL
 extern pte_t *pkmap_page_table;
 
+extern pmd_t *early_ioremap_page_table;
+
 void __native_set_fixmap(enum fixed_addresses idx, pte_t pte);
 void native_set_fixmap(enum fixed_addresses idx, phys_addr_t phys,
 		       pgprot_t flags);
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -393,6 +393,15 @@ void __init cleanup_highmap(void)
 	for (; vaddr + PMD_SIZE - 1 < vaddr_end; pmd++, vaddr += PMD_SIZE) {
 		if (pmd_none(*pmd))
 			continue;
+		/*
+		 * Careful here. vaddr_end might be past the pmd which is
+		 * used by the early ioremap stuff. Don't clean that out as
+		 * it's already set up.
+		 */
+		if (__phys_addr_nodebug((unsigned long) pmd) ==
+		    __phys_addr_nodebug((unsigned long) early_ioremap_page_table))
+			continue;
+
 		if (vaddr < (unsigned long) _text || vaddr > end)
 			set_pmd(pmd, __pmd(0));
 	}
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -27,6 +27,8 @@
 
 #include "physaddr.h"
 
+pmd_t __initdata *early_ioremap_page_table;
+
 /*
  * Fix up the linear direct mapping of the kernel to avoid cache attribute
  * conflicts.
@@ -709,7 +711,7 @@ void __init early_ioremap_init(void)
 	pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN));
 	memset(bm_pte, 0, sizeof(bm_pte));
 	pmd_populate_kernel(&init_mm, pmd, bm_pte);
-
+	early_ioremap_page_table = pmd;
 	/*
 	 * The boot-ioremap range spans multiple pmds, for which
 	 * we are not prepared:
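
For reference, the kind of early boot code that hits this path looks roughly like this (an illustrative caller, not taken from the patch; early_ioremap()/early_iounmap() are the real interfaces, phys_addr/size stand in for whatever firmware table is being read):

	/*
	 * The first early_ioremap() issued after cleanup_highmap() --
	 * without the hack above, __early_set_fixmap() crashes here
	 * because the PMD entry for the FIX_BTMAP PTE page is gone.
	 */
	void __iomem *map = early_ioremap(phys_addr, size);
	if (map) {
		/* ... read the firmware table ... */
		early_iounmap(map, size);
	}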