    Subject: Re: [PATCH v4 14/45] mm: kmsan: maintain KMSAN metadata for page operations

    On Fri, 1 Jul 2022 at 16:23, Alexander Potapenko <glider@google.com> wrote:
    >
    > Insert KMSAN hooks that make the necessary bookkeeping changes:
    > - poison page shadow and origins in alloc_pages()/free_page();
    > - clear page shadow and origins in clear_page(), copy_user_highpage();
    > - copy page metadata in copy_highpage(), wp_page_copy();
    > - handle vmap()/vunmap()/iounmap();
    >
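
    (For context: the bullets above map onto hooks declared in include/linux/kmsan.h,
    which this series adds. A rough sketch of that interface; the names and prototypes
    below are assumptions based on that header, not quotes from this patch:)

    #include <linux/types.h>	/* gfp_t, size_t */

    struct page;

    /* Poison or unpoison shadow and origins as pages are allocated and freed. */
    void kmsan_alloc_page(struct page *page, unsigned int order, gfp_t flags);
    void kmsan_free_page(struct page *page, unsigned int order);

    /* Copy one page's metadata to another, e.g. for copy_highpage(). */
    void kmsan_copy_page_meta(struct page *dst, struct page *src);

    /* Mark a byte range as initialized, e.g. after clear_page(). */
    void kmsan_unpoison_memory(const void *addr, size_t size);
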
    > Signed-off-by: Alexander Potapenko <glider@google.com>
    > ---
    > v2:
    > -- move page metadata hooks implementation here
    > -- remove call to kmsan_memblock_free_pages()
    >
    > v3:
    > -- use PAGE_SHIFT in kmsan_ioremap_page_range()
    >
    > v4:
    > -- change sizeof(type) to sizeof(*ptr)
    > -- replace occurrences of |var| with @var
    > -- swap mm: and kmsan: in the subject
    > -- drop __no_sanitize_memory from clear_page()
    >
    > Link: https://linux-review.googlesource.com/id/I6d4f53a0e7eab46fa29f0348f3095d9f2e326850
    > ---
    > arch/x86/include/asm/page_64.h | 12 ++++
    > arch/x86/mm/ioremap.c | 3 +
    > include/linux/highmem.h | 3 +
    > include/linux/kmsan.h | 123 +++++++++++++++++++++++++++++++++
    > mm/internal.h | 6 ++
    > mm/kmsan/hooks.c | 87 +++++++++++++++++++++++
    > mm/kmsan/shadow.c | 114 ++++++++++++++++++++++++++++++
    > mm/memory.c | 2 +
    > mm/page_alloc.c | 11 +++
    > mm/vmalloc.c | 20 +++++-
    > 10 files changed, 379 insertions(+), 2 deletions(-)
    >
    > diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
    > index baa70451b8df5..227dd33eb4efb 100644
    > --- a/arch/x86/include/asm/page_64.h
    > +++ b/arch/x86/include/asm/page_64.h
    > @@ -45,14 +45,26 @@ void clear_page_orig(void *page);
    > void clear_page_rep(void *page);
    > void clear_page_erms(void *page);
    >
    > +/* This is an assembly header, avoid including too much of kmsan.h */

    All of this code is already under an "#ifndef __ASSEMBLY__" guard, so does the
    assembly-header concern matter here?
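
    For reference, that part of page_64.h is laid out roughly like this (an abridged
    sketch, not the verbatim file):

    /* arch/x86/include/asm/page_64.h, abridged */
    #ifndef __ASSEMBLY__

    void clear_page_orig(void *page);
    void clear_page_rep(void *page);
    void clear_page_erms(void *page);

    #ifdef CONFIG_KMSAN
    /* A bare prototype here is never seen by assembly users anyway. */
    void kmsan_unpoison_memory(const void *addr, size_t size);
    #endif

    /* ... clear_page() and the rest of the C-only declarations ... */

    #endif /* !__ASSEMBLY__ */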

    > +#ifdef CONFIG_KMSAN
    > +void kmsan_unpoison_memory(const void *addr, size_t size);
    > +#endif
    > static inline void clear_page(void *page)
    > {
    > +#ifdef CONFIG_KMSAN
    > + /* alternative_call_2() changes @page. */
    > + void *page_copy = page;
    > +#endif
    > alternative_call_2(clear_page_orig,
    > clear_page_rep, X86_FEATURE_REP_GOOD,
    > clear_page_erms, X86_FEATURE_ERMS,
    > "=D" (page),
    > "0" (page)
    > : "cc", "memory", "rax", "rcx");
    > +#ifdef CONFIG_KMSAN
    > + /* Clear KMSAN shadow for the pages that have it. */
    > + kmsan_unpoison_memory(page_copy, PAGE_SIZE);

    What would happen if this were called before the alternative_call_2()? Could it
    (in the interest of simplicity) be moved above it? And if you used the
    kmsan-checks.h header, it also wouldn't need any "#ifdef CONFIG_KMSAN"
    anymore.
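
    I.e. something like the following (just a sketch of that suggestion, assuming the
    kmsan-checks.h header from this series provides a no-op stub for
    kmsan_unpoison_memory() when CONFIG_KMSAN=n):

    #include <linux/kmsan-checks.h>

    static inline void clear_page(void *page)
    {
            /* Unpoison up front; alternative_call_2() clobbers @page below. */
            kmsan_unpoison_memory(page, PAGE_SIZE);
            alternative_call_2(clear_page_orig,
                               clear_page_rep, X86_FEATURE_REP_GOOD,
                               clear_page_erms, X86_FEATURE_ERMS,
                               "=D" (page),
                               "0" (page)
                               : "cc", "memory", "rax", "rcx");
    }

    That way both #ifdef CONFIG_KMSAN blocks and the page_copy temporary go away.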

    > +#endif
    > }
