    Subject: [PATCH v2 20/26] userfaultfd: wp: support write protection for userfault vma range
    From: Shaohua Li <shli@fb.com>

    Add an API to enable/disable write protection for a vma range. Unlike
    mprotect, this doesn't split/merge vmas.
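
    As a usage sketch only: the helper is meant to be called once with
    enable_wp=true to arm write protection on a page-aligned range, and later
    with enable_wp=false to resolve it. The caller below is hypothetical
    (uffd_wp_toggle_example() is not part of this patch); only
    mwriteprotect_range() and its return codes are taken from the code added
    here.

    /*
     * Hypothetical caller sketch: write-protect a page-aligned range,
     * then resolve the protection once the write fault has been handled.
     */
    static int uffd_wp_toggle_example(struct mm_struct *mm, unsigned long start,
                                      unsigned long len, bool *mmap_changing)
    {
            int ret;

            /* Arm write protection on [start, start + len). */
            ret = mwriteprotect_range(mm, start, len, true, mmap_changing);
            if (ret)
                    return ret;     /* -EAGAIN: retry later; -ENOENT: bad range */

            /* ... the write fault would be reported and handled here ... */

            /* Drop the protection again so writes can proceed. */
            return mwriteprotect_range(mm, start, len, false, mmap_changing);
    }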

    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Rik van Riel <riel@redhat.com>
    Cc: Kirill A. Shutemov <kirill@shutemov.name>
    Cc: Mel Gorman <mgorman@suse.de>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Signed-off-by: Shaohua Li <shli@fb.com>
    Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
    [peterx:
    - use the helper to find VMA;
    - return -ENOENT if not found to match mcopy case;
    - use the new MM_CP_UFFD_WP* flags for change_protection
    - check against mmap_changing for failures]
    Signed-off-by: Peter Xu <peterx@redhat.com>
    ---
     include/linux/userfaultfd_k.h |  3 ++
     mm/userfaultfd.c              | 54 +++++++++++++++++++++++++++++++++++
     2 files changed, 57 insertions(+)

    diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
    index 765ce884cec0..8f6e6ed544fb 100644
    --- a/include/linux/userfaultfd_k.h
    +++ b/include/linux/userfaultfd_k.h
    @@ -39,6 +39,9 @@ extern ssize_t mfill_zeropage(struct mm_struct *dst_mm,
                                   unsigned long dst_start,
                                   unsigned long len,
                                   bool *mmap_changing);
    +extern int mwriteprotect_range(struct mm_struct *dst_mm,
    +                               unsigned long start, unsigned long len,
    +                               bool enable_wp, bool *mmap_changing);
     
     /* mm helpers */
     static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
    diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
    index fefa81c301b7..529d180bb4d7 100644
    --- a/mm/userfaultfd.c
    +++ b/mm/userfaultfd.c
    @@ -639,3 +639,57 @@ ssize_t mfill_zeropage(struct mm_struct *dst_mm, unsigned long start,
     {
             return __mcopy_atomic(dst_mm, start, 0, len, true, mmap_changing, 0);
     }
    +
    +int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
    +                        unsigned long len, bool enable_wp, bool *mmap_changing)
    +{
    +        struct vm_area_struct *dst_vma;
    +        pgprot_t newprot;
    +        int err;
    +
    +        /*
    +         * Sanitize the command parameters:
    +         */
    +        BUG_ON(start & ~PAGE_MASK);
    +        BUG_ON(len & ~PAGE_MASK);
    +
    +        /* Does the address range wrap, or is the span zero-sized? */
    +        BUG_ON(start + len <= start);
    +
    +        down_read(&dst_mm->mmap_sem);
    +
    +        /*
    +         * If memory mappings are changing because of non-cooperative
    +         * operation (e.g. mremap) running in parallel, bail out and
    +         * request the user to retry later
    +         */
    +        err = -EAGAIN;
    +        if (mmap_changing && READ_ONCE(*mmap_changing))
    +                goto out_unlock;
    +
    +        err = -ENOENT;
    +        dst_vma = vma_find_uffd(dst_mm, start, len);
    +        /*
    +         * Make sure the vma is not shared, that the dst range is
    +         * both valid and fully within a single existing vma.
    +         */
    +        if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
    +                goto out_unlock;
    +        if (!userfaultfd_wp(dst_vma))
    +                goto out_unlock;
    +        if (!vma_is_anonymous(dst_vma))
    +                goto out_unlock;
    +
    +        if (enable_wp)
    +                newprot = vm_get_page_prot(dst_vma->vm_flags & ~(VM_WRITE));
    +        else
    +                newprot = vm_get_page_prot(dst_vma->vm_flags);
    +
    +        change_protection(dst_vma, start, start + len, newprot,
    +                          enable_wp ? MM_CP_UFFD_WP : MM_CP_UFFD_WP_RESOLVE);
    +
    +        err = 0;
    +out_unlock:
    +        up_read(&dst_mm->mmap_sem);
    +        return err;
    +}
    --
    2.17.1