    Subject: [PATCH v2 2/8] mm/hugetlb: Prepare hugetlb_follow_page_mask() for FOLL_PIN

    follow_page() doesn't use FOLL_PIN, and hugetlb doesn't seem to be a target
    of FOLL_WRITE either. However, add the checks anyway.

    Namely, handle either the need to CoW due to a missing write bit, or proper
    CoR (Copy-On-Read) on !AnonExclusive pages over R/O pins, by rejecting the
    follow-page attempt in those cases. That brings this function closer to
    follow_hugetlb_page().

    So these cases didn't matter before, and still don't for now. But they will
    matter once slow-gup is switched over to use hugetlb_follow_page_mask().
    At that point we'll also care about returning -EMLINK properly, as that's
    the gup-internal API for "we should do CoR". It's not really needed for the
    follow_page() path, though.
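
    For context, a minimal sketch of how a slow-gup style caller is expected to
    consume that -EMLINK return: treat it as "CoR needed", fault with unshare
    semantics, and retry the walk instead of propagating the error. This is not
    the actual mm/gup.c code; the helper names and signatures below
    (follow_page_mask(), faultin_page(), the retry label) are illustrative
    assumptions only.

        /*
         * Sketch only: slow-gup style handling of -EMLINK.
         * (Fragment; assumes the usual slow-gup loop locals: vma, addr,
         *  foll_flags, ctx, locked, ret.)
         */
        page = follow_page_mask(vma, addr, foll_flags, &ctx);
        if (!page || PTR_ERR(page) == -EMLINK) {
                /* unshare == true only when the walk asked for CoR */
                ret = faultin_page(vma, addr, &foll_flags,
                                   PTR_ERR(page) == -EMLINK, &locked);
                if (!ret)
                        goto retry;     /* re-walk now that CoR is resolved */
        }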

    While at it, switch the try_grab_page() call to use WARN_ON_ONCE(), to make
    it clear that it should simply never fail.
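
    That WARN_ON_ONCE() conversion does not show up in the hunks quoted below;
    as a rough sketch of the intended shape (assuming try_grab_page() returns 0
    on success and a negative error otherwise):

        /* Sketch: make a ref-grab failure loud instead of silently eating it */
        if (WARN_ON_ONCE(try_grab_page(page, flags))) {
                page = NULL;
                goto out;
        }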

    Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
    Signed-off-by: Peter Xu <peterx@redhat.com>
    ---
    mm/hugetlb.c | 24 +++++++++++++++---------
    1 file changed, 15 insertions(+), 9 deletions(-)

    diff --git a/mm/hugetlb.c b/mm/hugetlb.c
    index f75f5e78ff0b..9a6918c4250a 100644
    --- a/mm/hugetlb.c
    +++ b/mm/hugetlb.c
    @@ -6463,13 +6463,6 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
             spinlock_t *ptl;
             pte_t *pte, entry;
     
    -        /*
    -         * FOLL_PIN is not supported for follow_page(). Ordinary GUP goes via
    -         * follow_hugetlb_page().
    -         */
    -        if (WARN_ON_ONCE(flags & FOLL_PIN))
    -                return NULL;
    -
             hugetlb_vma_lock_read(vma);
             pte = hugetlb_walk(vma, haddr, huge_page_size(h));
             if (!pte)
    @@ -6478,8 +6471,21 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
             ptl = huge_pte_lock(h, mm, pte);
             entry = huge_ptep_get(pte);
             if (pte_present(entry)) {
    -                page = pte_page(entry) +
    -                        ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
    +                page = pte_page(entry);
    +
    +                if (gup_must_unshare(vma, flags, page)) {
    +                        /* Tell the caller to do Copy-On-Read */
    +                        page = ERR_PTR(-EMLINK);
    +                        goto out;
    +                }
    +
    +                if ((flags & FOLL_WRITE) && !pte_write(entry)) {
    +                        page = NULL;
    +                        goto out;
    +                }
    +
    +                page += ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
    +
                     /*
                      * Note that page may be a sub-page, and with vmemmap
                      * optimizations the page struct may be read only.
    --
    2.40.1