Subject: [PATCH v5 16/25] khugepaged: skip collapse if uffd-wp detected
Don't collapse the huge PMD if there are any userfault write-protected
small PTEs.  The problem is that the write protection is tracked at
small-page granularity, and there is no way to keep all of this write
protection information if the small pages are merged into a huge PMD.

The same applies to swap entries and migration entries, so do the check
for them as well, disregarding khugepaged_max_ptes_swap.
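
As an illustration only (not part of this patch): a minimal userspace
sketch of how a range ends up with uffd-wp armed small PTEs in the first
place, assuming the UFFDIO_REGISTER_MODE_WP and UFFDIO_WRITEPROTECT
interfaces introduced earlier in this series.  The UFFDIO_API handshake
is simplified and the fault-reading thread and error reporting are
omitted.

/* Illustrative sketch only, not part of this patch. */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

int main(void)
{
	size_t len = 2UL << 20;		/* one PMD-sized region on x86_64 */
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	struct uffdio_api api = { .api = UFFD_API };
	struct uffdio_register reg = { 0 };
	struct uffdio_writeprotect wp = { 0 };
	char *area;

	if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api))
		exit(1);

	area = mmap(NULL, len, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (area == MAP_FAILED)
		exit(1);
	memset(area, 1, len);		/* populate the small pages */

	reg.range.start = (unsigned long)area;
	reg.range.len = len;
	reg.mode = UFFDIO_REGISTER_MODE_WP;
	if (ioctl(uffd, UFFDIO_REGISTER, &reg))
		exit(1);

	wp.range.start = (unsigned long)area;
	wp.range.len = len;
	wp.mode = UFFDIO_WRITEPROTECT_MODE_WP;
	if (ioctl(uffd, UFFDIO_WRITEPROTECT, &wp))
		exit(1);

	/*
	 * The small PTEs backing 'area' now carry the uffd-wp bit, so a
	 * khugepaged scan of this range should bail out with
	 * SCAN_PTE_UFFD_WP instead of collapsing to a huge PMD.
	 */
	pause();
	return 0;
}

With the huge_memory trace events enabled, such a range shows up with
scan status "pte_uffd_wp" rather than being collapsed.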

Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/trace/events/huge_memory.h |  1 +
 mm/khugepaged.c                    | 23 +++++++++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index dd4db334bd63..2d7bad9cb976 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -13,6 +13,7 @@
 	EM( SCAN_PMD_NULL,		"pmd_null")			\
 	EM( SCAN_EXCEED_NONE_PTE,	"exceed_none_pte")		\
 	EM( SCAN_PTE_NON_PRESENT,	"pte_non_present")		\
+	EM( SCAN_PTE_UFFD_WP,		"pte_uffd_wp")			\
 	EM( SCAN_PAGE_RO,		"no_writable_page")		\
 	EM( SCAN_LACK_REFERENCED_PAGE,	"lack_referenced_page")		\
 	EM( SCAN_PAGE_NULL,		"page_null")			\
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 0f7419938008..fc40aa214be7 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -29,6 +29,7 @@ enum scan_result {
 	SCAN_PMD_NULL,
 	SCAN_EXCEED_NONE_PTE,
 	SCAN_PTE_NON_PRESENT,
+	SCAN_PTE_UFFD_WP,
 	SCAN_PAGE_RO,
 	SCAN_LACK_REFERENCED_PAGE,
 	SCAN_PAGE_NULL,
@@ -1128,6 +1129,15 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		pte_t pteval = *_pte;
 		if (is_swap_pte(pteval)) {
 			if (++unmapped <= khugepaged_max_ptes_swap) {
+				/*
+				 * Always be strict with uffd-wp
+				 * enabled swap entries.  Please see
+				 * comment below for pte_uffd_wp().
+				 */
+				if (pte_swp_uffd_wp(pteval)) {
+					result = SCAN_PTE_UFFD_WP;
+					goto out_unmap;
+				}
 				continue;
 			} else {
 				result = SCAN_EXCEED_SWAP_PTE;
@@ -1147,6 +1157,19 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 			result = SCAN_PTE_NON_PRESENT;
 			goto out_unmap;
 		}
+		if (pte_uffd_wp(pteval)) {
+			/*
+			 * Don't collapse the page if any of the small
+			 * PTEs are armed with uffd write protection.
+			 * Here we can also mark the new huge pmd as
+			 * write protected if any of the small ones is
+			 * marked but that could bring unknown
+			 * userfault messages that fall outside of
+			 * the registered range.  So, just be simple.
+			 */
+			result = SCAN_PTE_UFFD_WP;
+			goto out_unmap;
+		}
 		if (pte_write(pteval))
 			writable = true;

--
2.21.0