From: Andy Lutomirski
Date: Thu, 23 Nov 2017
Subject: Re: [PATCH v2 06/18] x86/kasan/64: Teach KASAN about the cpu_entry_area
On Thu, Nov 23, 2017 at 2:08 AM, Andrey Ryabinin
<aryabinin@virtuozzo.com> wrote:
>
>
> On 11/22/2017 06:22 PM, Andy Lutomirski wrote:
>> On Wed, Nov 22, 2017 at 1:05 AM, Andrey Ryabinin
>> <aryabinin@virtuozzo.com> wrote:
>>>
>>>
>>> On 11/22/2017 07:44 AM, Andy Lutomirski wrote:
>>>> The cpu_entry_area will contain stacks. Make sure that KASAN has
>>>> appropriate shadow mappings for them.
>>>>
>>>> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
>>>> Cc: Alexander Potapenko <glider@google.com>
>>>> Cc: Dmitry Vyukov <dvyukov@google.com>
>>>> Cc: kasan-dev@googlegroups.com
>>>> Signed-off-by: Andy Lutomirski <luto@kernel.org>
>>>> ---
>>>> arch/x86/mm/kasan_init_64.c | 9 ++++++++-
>>>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
>>>> index 99dfed6dfef8..43d376687315 100644
>>>> --- a/arch/x86/mm/kasan_init_64.c
>>>> +++ b/arch/x86/mm/kasan_init_64.c
>>>> @@ -330,7 +330,14 @@ void __init kasan_init(void)
>>>> early_pfn_to_nid(__pa(_stext)));
>>>>
>>>> kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END),
>>>> - (void *)KASAN_SHADOW_END);
>>>> + kasan_mem_to_shadow((void *)(__fix_to_virt(FIX_CPU_ENTRY_AREA_BOTTOM))));
>>>> +
>>>> + kasan_populate_shadow((unsigned long)kasan_mem_to_shadow((void *)(__fix_to_virt(FIX_CPU_ENTRY_AREA_BOTTOM))),
>>>> + (unsigned long)kasan_mem_to_shadow((void *)(__fix_to_virt(FIX_CPU_ENTRY_AREA_TOP) + PAGE_SIZE)),
>>>
>>> What's '+ PAGE_SIZE' for?
>>>
>>
>> __fix_to_virt(..._TOP) returns the address of the *bottom* of the last
>> cpu_entry_area page. +PAGE_SIZE returns one past the end of the
>> region, which I assume is the correct thing to pass.
>>
>
> Right.
>
> Perhaps it would be better to use variables, just to avoid such awfully long lines. I mean like this:
> fixmap_shadow_start = (void *)__fix_to_virt(FIX_CPU_ENTRY_AREA_BOTTOM);
> fixmap_shadow_start = kasan_mem_to_shadow(fixmap_shadow_start);
>
> fixmap_shadow_end = (void *)__fix_to_virt(FIX_CPU_ENTRY_AREA_TOP) + PAGE_SIZE;
> fixmap_shadow_end = kasan_mem_to_shadow(fixmap_shadow_end);
>
> kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END),
> fixmap_shadow_start);
>
> kasan_populate_shadow((unsigned long)fixmap_shadow_start,
> (unsigned long)fixmap_shadow_end,
> 0);
>
> I'm also thinking that we should change kasan_populate_shadow() to take void* instead of 'unsigned long'
> to avoid those casts.

I did something similar, but I left the kasan_mem_to_shadow() calls in for
consistency with the rest.
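
Roughly along these lines, keeping kasan_mem_to_shadow() at the call site
(a sketch of the shape only, not necessarily the exact v3 hunk; the local
names come from the suggestion above and the nid argument of 0 is carried
over from it):

	void *fixmap_shadow_start, *fixmap_shadow_end;

	fixmap_shadow_start = (void *)__fix_to_virt(FIX_CPU_ENTRY_AREA_BOTTOM);
	fixmap_shadow_start = kasan_mem_to_shadow(fixmap_shadow_start);

	/*
	 * __fix_to_virt(FIX_CPU_ENTRY_AREA_TOP) is the base of the last
	 * (highest) page of the area, so + PAGE_SIZE is one past the end.
	 */
	fixmap_shadow_end = (void *)(__fix_to_virt(FIX_CPU_ENTRY_AREA_TOP) + PAGE_SIZE);
	fixmap_shadow_end = kasan_mem_to_shadow(fixmap_shadow_end);

	/* Zero shadow for everything between MODULES_END and the cpu_entry_area... */
	kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END),
				   fixmap_shadow_start);

	/* ...and real shadow pages for the cpu_entry_area itself. */
	kasan_populate_shadow((unsigned long)fixmap_shadow_start,
			      (unsigned long)fixmap_shadow_end,
			      0);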

I think the real way to clean this up is like this:

struct kasan_range {
	void *start;
	void *end;
};

struct kasan_range ranges[] = {
	{ range 1 },
	{ range 2 },
};

sort(ranges, ARRAY_SIZE(ranges), sizeof(ranges[0]), ...);

last_end = PAGE_OFFSET; /* or whatever is right */

for (i = 0; i < ARRAY_SIZE(ranges); i++) {
	WARN_ON(ranges[i].start < last_end);

	if (ranges[i].start > last_end)
		kasan_populate_zero_shadow(...);
	kasan_populate_shadow(...);
	last_end = ranges[i].end;
}

kasan_populate_zero_shadow(last_end, the real end);

Then the code doesn't need to duplicate each boundary or to hardcode
the order in which things appear.
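
Spelled out a bit more, it could look something like the sketch below. This
assumes it lives in arch/x86/mm/kasan_init_64.c next to kasan_init(), and
that kasan_populate_shadow() keeps its current unsigned long arguments; the
range list, the comparator, the kasan_populate_all() name, and the initial
KASAN_SHADOW_START bound are illustrative only, not code from any posted
patch:

#include <linux/sort.h>	/* for sort() */

struct kasan_range {
	void *start;
	void *end;
};

static int __init kasan_range_cmp(const void *a, const void *b)
{
	const struct kasan_range *ra = a, *rb = b;

	if (ra->start < rb->start)
		return -1;
	return ra->start > rb->start;
}

static void __init kasan_populate_all(void)
{
	/*
	 * Hypothetical example entries: shadow for the kernel image and for
	 * the cpu_entry_area fixmap pages.  Real code would list every region
	 * that needs backed shadow.
	 */
	struct kasan_range ranges[] = {
		{
			kasan_mem_to_shadow(_stext),
			kasan_mem_to_shadow(_end),
		},
		{
			kasan_mem_to_shadow((void *)__fix_to_virt(FIX_CPU_ENTRY_AREA_BOTTOM)),
			kasan_mem_to_shadow((void *)(__fix_to_virt(FIX_CPU_ENTRY_AREA_TOP) + PAGE_SIZE)),
		},
	};
	void *last_end = (void *)KASAN_SHADOW_START;
	int i;

	sort(ranges, ARRAY_SIZE(ranges), sizeof(ranges[0]), kasan_range_cmp, NULL);

	for (i = 0; i < ARRAY_SIZE(ranges); i++) {
		WARN_ON(ranges[i].start < last_end);

		/* Any gap before this range gets the zero (read-only) shadow page. */
		if (ranges[i].start > last_end)
			kasan_populate_zero_shadow(last_end, ranges[i].start);
		/* The range itself gets real shadow memory. */
		kasan_populate_shadow((unsigned long)ranges[i].start,
				      (unsigned long)ranges[i].end,
				      early_pfn_to_nid(__pa(_stext)));
		last_end = ranges[i].end;
	}

	kasan_populate_zero_shadow(last_end, (void *)KASAN_SHADOW_END);
}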
