Subject: Re: [PATCH] x86/mm: Do not shuffle CPU entry areas without KASLR
On Fri, Mar 03, 2023 at 01:24:53PM -0800, Dave Hansen <dave.hansen@intel.com> wrote:
> Should this be kaslr_memory_enabled() or kaslr_enabled()?

Originally, I had chosen kaslr_enabled(), having looked at the PGD-alignment
requirement of KASAN (the whole randomization area CPU_ENTRY_AREA_MAP_SIZE
would fit within a single PGD after all).

> The delta seems to be CONFIG_KASAN, and the cpu entry area randomization
> works just fine with KASAN after some recent fixes.

But then I found the KASAN code trying to be smart with its fixups, hence I
chickened out and went with kaslr_memory_enabled().

> I _think_ that makes cpu entry area randomization more like module
> randomization which would point toward kaslr_enabled().

<del>I understood that the only difference between kaslr_enabled and
kaslr_memory_enabled was the PGD alignment of the respective regions.
(Although I don't see where KASAN would break with unaligned ranges, apart
from less efficient page tables.)</del>
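
For reference, the delta between the two helpers is roughly the
CONFIG_RANDOMIZE_MEMORY check (paraphrased from arch/x86/include/asm/setup.h,
so it may not match your tree exactly):

	static inline bool kaslr_enabled(void)
	{
		return !!(boot_params.hdr.loadflags & KASLR_FLAG);
	}

	static inline bool kaslr_memory_enabled(void)
	{
		return kaslr_enabled() && IS_ENABLED(CONFIG_RANDOMIZE_MEMORY);
	}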

I've just found your [1], which wonders about the same thing.


That being said, I will send a v2 with just the kaslr_enabled() guard and an
updated commit message pointing out the KASAN fixups (to beware of when
backporting).
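
To make the intent concrete, the v2 guard would sit at the top of
init_cea_offsets() in arch/x86/mm/cpu_entry_area.c, along these lines (a
sketch of the idea, not the exact hunk; falling back to the identity
_cea_offset mapping is my reading of what "no shuffling" should mean):

	static __init void init_cea_offsets(void)
	{
		unsigned int max_cea;
		unsigned int i, j;

		/* Keep the layout deterministic when KASLR is off. */
		if (!kaslr_enabled()) {
			for_each_possible_cpu(i)
				per_cpu(_cea_offset, i) = i;
			return;
		}

		/* ... existing randomization loop unchanged ... */
	}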

Thanks,
Michal

[1] https://lore.kernel.org/r/299fbb80-e3ab-3b7c-3491-e85cac107930@intel.com/
