Subject: Re: [PATCH v4 3/3] powerpc/32: Add KASAN support

Hi Daniel,

On 08/02/2019 at 17:18, Daniel Axtens wrote:
> Hi Christophe,
>
> I've been attempting to port this to 64-bit Book3e nohash (e6500),
> although I think I've ended up with an approach more similar to Aneesh's
> much earlier (2015) series for book3s.
>
> Part of this is just due to the changes between 32 and 64 bits - we need
> to hack around the discontiguous mappings - but one thing that I'm
> particularly puzzled by is what the kasan_early_init is supposed to do.

It shouldn't be a problem, as my patch uses a 'for_each_memblock(memory,
reg)' loop.

>
>> +void __init kasan_early_init(void)
>> +{
>> +	unsigned long addr = KASAN_SHADOW_START;
>> +	unsigned long end = KASAN_SHADOW_END;
>> +	unsigned long next;
>> +	pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
>> +	int i;
>> +	phys_addr_t pa = __pa(kasan_early_shadow_page);
>> +
>> +	BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
>> +
>> +	if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
>> +		panic("KASAN not supported with Hash MMU\n");
>> +
>> +	for (i = 0; i < PTRS_PER_PTE; i++)
>> +		__set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
>> +			     kasan_early_shadow_pte + i,
>> +			     pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
>> +
>> +	do {
>> +		next = pgd_addr_end(addr, end);
>> +		pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
>> +	} while (pmd++, addr = next, addr != end);
>> +}
>
> As far as I can tell it's mapping the early shadow page, read-only, over
> the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
> shadow PTE array from the generic code.
>
> I haven't been able to find an answer to why this is in the docs, so I
> was wondering if you or anyone else could explain the early part of
> kasan init a bit better.

See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an
explanation of the shadow.

When a shadow byte is 0, it means the corresponding memory area is
entirely accessible.

It is necessary to set up a shadow area as soon as possible because all
data accesses check the shadow area, right from the beginning (except in
the few files where sanitization has been disabled in the Makefiles).

Until the real shadow area is set up, all accesses are granted thanks to
the zero shadow area being full of zeros.
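
To make this concrete, the check inserted before each access boils down
to something like this (simplified from include/linux/kasan.h and
mm/kasan/generic.c):

	static inline void *kasan_mem_to_shadow(const void *addr)
	{
		return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
			+ KASAN_SHADOW_OFFSET;
	}

	/* Simplified 1-byte check: a shadow byte of 0 means the whole
	 * 8-byte granule is accessible, 1..7 mean only the first N bytes
	 * are, and negative values are poison markers. */
	static bool memory_is_poisoned_1(unsigned long addr)
	{
		s8 shadow_value = *(s8 *)kasan_mem_to_shadow((void *)addr);

		if (shadow_value) {
			s8 last_byte = addr & KASAN_SHADOW_MASK;
			return last_byte >= shadow_value;
		}
		return false;	/* shadow is 0: access allowed */
	}

As long as every shadow load hits a page full of zeros, shadow_value is
always 0 and nothing is ever reported.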

I mainly used the ARM arch as an example when I implemented KASAN for ppc32.

>
> At the moment, I don't do any early init, and like Aneesh's series for
> book3s, I end up needing a special flag to disable kasan until after
> kasan_init. Also, as with Balbir's series for Radix, some tests didn't
> fire, although my missing tests are a superset of his. I suspect the
> early init has something to do with these...?

I think you should really focus on establishing a zero shadow area as
early as possible instead of trying to hack the core parts of KASAN.
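
As a sketch, a 64-bit early init would have roughly the shape below. I
am only reusing the names from my patch and the generic
kasan_early_shadow_* tables here; on Book3E you would additionally have
to populate the pud entries with kasan_early_shadow_pmd, which 32-bit
doesn't need:

	void __init kasan_early_init(void)
	{
		unsigned long addr = KASAN_SHADOW_START;
		phys_addr_t pa = __pa(kasan_early_shadow_page);
		int i;

		/* Point every slot of the shared early PTE table at the
		 * zero page, read-only. */
		for (i = 0; i < PTRS_PER_PTE; i++)
			__set_pte_at(&init_mm,
				     (unsigned long)kasan_early_shadow_page,
				     kasan_early_shadow_pte + i,
				     pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);

		/* Hang that table off every pmd covering the shadow range. */
		for (; addr < KASAN_SHADOW_END; addr += PMD_SIZE)
			pmd_populate_kernel(&init_mm,
					    pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr),
					    kasan_early_shadow_pte);
	}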

>
> (I'm happy to collate answers into a patch to the docs, btw!)

We can also keep the discussion going via
https://github.com/linuxppc/issues/issues/106

>
> In the long term I hope to revive Aneesh's and Balbir's series for hash
> and radix as well.

Great.

Christophe

>
> Regards,
> Daniel
>
>> +
>> +static void __init kasan_init_region(struct memblock_region *reg)
>> +{
>> +	void *start = __va(reg->base);
>> +	void *end = __va(reg->base + reg->size);
>> +	unsigned long k_start, k_end, k_cur, k_next;
>> +	pmd_t *pmd;
>> +
>> +	if (start >= end)
>> +		return;
>> +
>> +	k_start = (unsigned long)kasan_mem_to_shadow(start);
>> +	k_end = (unsigned long)kasan_mem_to_shadow(end);
>> +	pmd = pmd_offset(pud_offset(pgd_offset_k(k_start), k_start), k_start);
>> +
>> +	for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) {
>> +		k_next = pgd_addr_end(k_cur, k_end);
>> +		if ((void *)pmd_page_vaddr(*pmd) == kasan_early_shadow_pte) {
>> +			pte_t *new = pte_alloc_one_kernel(&init_mm);
>> +
>> +			if (!new)
>> +				panic("kasan: pte_alloc_one_kernel() failed");
>> +			memcpy(new, kasan_early_shadow_pte, PTE_TABLE_SIZE);
>> +			pmd_populate_kernel(&init_mm, pmd, new);
>> +		}
>> +	}
>> +
>> +	for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
>> +		void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>> +		pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
>> +
>> +		if (!va)
>> +			panic("kasan: memblock_alloc() failed");
>> +		pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
>> +		pte_update(pte_offset_kernel(pmd, k_cur), ~0, pte_val(pte));
>> +	}
>> +	flush_tlb_kernel_range(k_start, k_end);
>> +}
>> +
>> +void __init kasan_init(void)
>> +{
>> +	struct memblock_region *reg;
>> +
>> +	for_each_memblock(memory, reg)
>> +		kasan_init_region(reg);
>> +
>> +	kasan_init_tags();
>> +
>> +	/* At this point kasan is fully initialized. Enable error messages */
>> +	init_task.kasan_depth = 0;
>> +	pr_info("KASAN init done\n");
>> +}
>> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
>> index 33cc6f676fa6..ae7db88b72d6 100644
>> --- a/arch/powerpc/mm/mem.c
>> +++ b/arch/powerpc/mm/mem.c
>> @@ -369,6 +369,10 @@ void __init mem_init(void)
>>  	pr_info(" * 0x%08lx..0x%08lx : highmem PTEs\n",
>>  		PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP));
>>  #endif /* CONFIG_HIGHMEM */
>> +#ifdef CONFIG_KASAN
>> +	pr_info(" * 0x%08lx..0x%08lx : kasan shadow mem\n",
>> +		KASAN_SHADOW_START, KASAN_SHADOW_END);
>> +#endif
>>  #ifdef CONFIG_NOT_COHERENT_CACHE
>>  	pr_info(" * 0x%08lx..0x%08lx : consistent mem\n",
>>  		IOREMAP_TOP, IOREMAP_TOP + CONFIG_CONSISTENT_SIZE);
>> --
>> 2.13.3
