Subject: Re: [PATCH] arm64: kaslr: Reserve size of ARM64_MEMSTART_ALIGN in linear region
Hi Ard,


On 2018/12/24 17:45, Ard Biesheuvel wrote:
> Does the following change fix your issue as well?
>
> index 9b432d9fcada..9dcf0ff75a11 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -447,7 +447,7 @@ void __init arm64_memblock_init(void)
>  	 * memory spans, randomize the linear region as well.
>  	 */
>  	if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
> -		range = range / ARM64_MEMSTART_ALIGN + 1;
> +		range /= ARM64_MEMSTART_ALIGN;
>  		memstart_addr -= ARM64_MEMSTART_ALIGN *
>  				 ((range * memstart_offset_seed) >> 16);
>  	}
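
To make the effect of this hunk concrete, here is a small standalone sketch
(plain userspace C, not kernel code; ARM64_MEMSTART_ALIGN and the amount of
slack are made-up illustrative values) comparing the worst-case shift of
memstart_addr produced by the old and new expressions:

/*
 * Standalone sketch: worst-case linear-region shift under the old and
 * new expressions from the hunk above. All sizes are assumptions made
 * for illustration only.
 */
#include <stdio.h>
#include <stdint.h>

#define ARM64_MEMSTART_ALIGN	(1ULL << 30)	/* assume 1 GiB for the example */

/* old code: rounds the number of candidate slots up */
static uint64_t max_offset_old(uint64_t range, uint16_t seed)
{
	range = range / ARM64_MEMSTART_ALIGN + 1;
	return ARM64_MEMSTART_ALIGN * ((range * seed) >> 16);
}

/* proposed change: rounds the number of candidate slots down */
static uint64_t max_offset_new(uint64_t range, uint16_t seed)
{
	range /= ARM64_MEMSTART_ALIGN;
	return ARM64_MEMSTART_ALIGN * ((range * seed) >> 16);
}

int main(void)
{
	uint64_t range = 16ULL * ARM64_MEMSTART_ALIGN;	/* assumed slack in the linear region */
	uint16_t seed = 0xffff;				/* worst-case memstart_offset_seed */

	/* old: 16 GiB -- the entire slack can be consumed        */
	/* new: 15 GiB -- at least ARM64_MEMSTART_ALIGN stays free */
	printf("old: %llu GiB\n", (unsigned long long)(max_offset_old(range, seed) >> 30));
	printf("new: %llu GiB\n", (unsigned long long)(max_offset_new(range, seed) >> 30));
	return 0;
}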

Yes, that fixes it as well. I just think that modifying the first *range*
calculation would be easier to grasp; what do you think?
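
For comparison, my reading of "modifying the first *range* calculation" is to
reserve ARM64_MEMSTART_ALIGN when the slack is first computed and keep the
original rounding. The exact shape is an assumption based on the patch subject,
not the real arch/arm64/mm/init.c code, and all sizes below are made up; the
sketch just shows it ends up with the same worst-case headroom:

/*
 * Sketch of the alternative: subtract ARM64_MEMSTART_ALIGN up front and
 * keep the original "+ 1" rounding. Illustrative values only.
 */
#include <stdio.h>
#include <stdint.h>

#define ARM64_MEMSTART_ALIGN	(1ULL << 30)	/* assume 1 GiB for the example */

int main(void)
{
	uint64_t linear_region_size = 64ULL << 30;	/* assumed linear region: 64 GiB */
	uint64_t dram_size = 48ULL << 30;		/* assumed DRAM span: 48 GiB */
	uint16_t seed = 0xffff;				/* worst-case memstart_offset_seed */

	/* "first range calculation", with ARM64_MEMSTART_ALIGN reserved up front */
	uint64_t range = (linear_region_size - ARM64_MEMSTART_ALIGN) - dram_size;

	/* original rounding kept unchanged */
	range = range / ARM64_MEMSTART_ALIGN + 1;
	uint64_t offset = ARM64_MEMSTART_ALIGN * ((range * seed) >> 16);

	/*
	 * Worst case: 15 GiB of the 16 GiB slack, so one ARM64_MEMSTART_ALIGN
	 * unit stays unmapped at the top -- the same headroom as the change
	 * quoted above.
	 */
	printf("max offset: %llu GiB\n", (unsigned long long)(offset >> 30));
	return 0;
}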



Thanks,
Yueyi