Date:    Sat, 17 Dec 2022 18:55:30 -0000
From:    "tip-bot2 for Sean Christopherson" <>
Subject: [tip: x86/mm] x86/mm: Recompute physical address for every page of per-CPU CEA mapping
The following commit has been merged into the x86/mm branch of tip:
Commit-ID:     80d72a8f76e8f3f0b5a70b8c7022578e17bde8e7
Gitweb:        https://git.kernel.org/tip/80d72a8f76e8f3f0b5a70b8c7022578e17bde8e7
Author:        Sean Christopherson <seanjc@google.com>
AuthorDate:    Thu, 10 Nov 2022 20:35:00
Committer:     Dave Hansen <dave.hansen@linux.intel.com>
CommitterDate: Thu, 15 Dec 2022 10:37:28 -08:00
x86/mm: Recompute physical address for every page of per-CPU CEA mapping
Recompute the physical address for each per-CPU page in the CPU entry area; a recent commit inadvertently modified cea_map_percpu_pages() such that every PTE is mapped to the physical address of the first page.
Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Link: https://lkml.kernel.org/r/20221110203504.1985010-2-seanjc@google.com
---
 arch/x86/mm/cpu_entry_area.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index dff9001..d831aae 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -97,7 +97,7 @@ cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
 					early_pfn_to_nid(PFN_DOWN(pa)));
 
 	for ( ; pages; pages--, cea_vaddr+= PAGE_SIZE, ptr += PAGE_SIZE)
-		cea_set_pte(cea_vaddr, pa, prot);
+		cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
 }
 
 static void __init percpu_setup_debug_store(unsigned int cpu)
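For readers without the tree handy, the following minimal user-space C sketch reproduces the pattern the patch corrects: hoisting the physical-address lookup out of the loop maps every page to the first page's physical address. The names lookup_phys(), map_pages(), phys_of_page[] and percpu_area[] are hypothetical stand-ins for per_cpu_ptr_to_phys(), cea_set_pte() and the real per-CPU allocator; this is an illustrative sketch, not kernel code.

/*
 * Sketch of the bug fixed above: computing the physical address once
 * before the loop records the first page's address in every "PTE".
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096
#define NR_PAGES  4

/* Pretend physical layout: per-CPU pages are not guaranteed to be
 * physically contiguous, so each page's address must be looked up. */
static const uint64_t phys_of_page[NR_PAGES] = {
	0x10000, 0x35000, 0x22000, 0x48000,
};

static uint8_t percpu_area[NR_PAGES * PAGE_SIZE];

/* Records the mappings, standing in for the real page tables. */
static uint64_t pte[NR_PAGES];

/* Stand-in for per_cpu_ptr_to_phys(). */
static uint64_t lookup_phys(void *ptr)
{
	size_t idx = (size_t)((uint8_t *)ptr - percpu_area) / PAGE_SIZE;

	return phys_of_page[idx] + ((uintptr_t)ptr & (PAGE_SIZE - 1));
}

static void map_pages(void *ptr, int pages, int buggy)
{
	uint64_t pa = lookup_phys(ptr);	/* computed once, as in the old code */
	int i;

	for (i = 0; pages; pages--, i++, ptr = (uint8_t *)ptr + PAGE_SIZE)
		pte[i] = buggy ? pa : lookup_phys(ptr);
}

int main(void)
{
	int i;

	map_pages(percpu_area, NR_PAGES, 1);
	puts("buggy: every PTE gets the first page's phys addr");
	for (i = 0; i < NR_PAGES; i++)
		printf("  pte[%d] = 0x%llx\n", i, (unsigned long long)pte[i]);

	map_pages(percpu_area, NR_PAGES, 0);
	puts("fixed: each PTE gets its own page's phys addr");
	for (i = 0; i < NR_PAGES; i++)
		printf("  pte[%d] = 0x%llx\n", i, (unsigned long long)pte[i]);

	return 0;
}

Running the sketch prints four identical addresses for the buggy variant and four distinct ones for the fixed variant, which is the difference the one-line change above makes: because per-CPU pages need not be physically contiguous, the physical address has to be looked up for every page rather than derived from the first one.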