From: Ard Biesheuvel <>
Date: Wed, 26 Dec 2018 14:49:58 +0100
Subject: Re: [PATCH] arm64: kaslr: Reserve size of ARM64_MEMSTART_ALIGN in linear region
On Tue, 25 Dec 2018 at 03:30, Yueyi Li <liyueyi@live.com> wrote:
>
> Hi Ard,
>
>
> On 2018/12/24 17:45, Ard Biesheuvel wrote:
> > Does the following change fix your issue as well?
> >
> > index 9b432d9fcada..9dcf0ff75a11 100644
> > --- a/arch/arm64/mm/init.c
> > +++ b/arch/arm64/mm/init.c
> > @@ -447,7 +447,7 @@ void __init arm64_memblock_init(void)
> >   * memory spans, randomize the linear region as well.
> >   */
> >  if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
> > -	range = range / ARM64_MEMSTART_ALIGN + 1;
> > +	range /= ARM64_MEMSTART_ALIGN;
> >  	memstart_addr -= ARM64_MEMSTART_ALIGN *
> >  			 ((range * memstart_offset_seed) >> 16);
> >  }
>
> Yes, it can fix this also. I just think modify the first *range*
> calculation would be easier to grasp, what do you think?
>
I don't think there is a difference, to be honest, but I will leave it up to the maintainers to decide which approach they prefer.
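[Editor's note: the following is a minimal user-space sketch, not from the
thread, of the offset arithmetic in the diff above. It assumes an
ARM64_MEMSTART_ALIGN of 1 GiB and a hypothetical 4 GiB of linear-region
slack ('range'); it shows how the old "+ 1" lets a worst-case 16-bit seed
consume the entire slack, while Ard's version always leaves headroom.]

	/*
	 * Sketch of the linear-region randomization math; values are
	 * hypothetical, and unsigned long long is used so the sums are
	 * 64-bit even on 32-bit hosts.
	 */
	#include <stdio.h>

	#define ALIGN_1G (1ULL << 30)	/* stand-in for ARM64_MEMSTART_ALIGN */

	int main(void)
	{
		unsigned long long range = 4 * ALIGN_1G; /* hypothetical slack */
		unsigned long long seed = 0xffff;  /* worst-case 16-bit seed */

		/* old code: (range / ALIGN + 1) slots -> offset can equal 'range' */
		unsigned long long old_off = ALIGN_1G *
			(((range / ALIGN_1G + 1) * seed) >> 16);

		/* suggested fix: range / ALIGN slots -> offset stays below 'range' */
		unsigned long long new_off = ALIGN_1G *
			(((range / ALIGN_1G) * seed) >> 16);

		printf("slack:      %llu GiB\n", range >> 30);    /* 4 GiB */
		printf("old offset: %llu GiB\n", old_off >> 30);  /* 4 GiB, no headroom */
		printf("new offset: %llu GiB\n", new_off >> 30);  /* 3 GiB */
		return 0;
	}

[With the "+ 1", a maximal seed yields an offset equal to the full slack,
pushing the end of RAM flush against the top of the linear region; dropping
it keeps at least one ARM64_MEMSTART_ALIGN of headroom, which is what the
patch subject refers to.]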