    From: Mike Kravetz
    Subject: [PATCH 0/8] hugetlb: Use new vma mutex for huge pmd sharing synchronization
    Date: 2022-08-24
    hugetlb fault scalability regressions have recently been reported [1].
    This is not the first such report, as regressions were also noted when
    commit c0d0381ade79 ("hugetlbfs: use i_mmap_rwsem for more pmd sharing
    synchronization") was added [2] in v5.7. At that time, a proposal to
    address the regression was suggested [3] but went nowhere.

    Neither the regression nor the benefit of this patch series is evident
    when running the vm_scalability benchmark reported in [2] on a recent
    kernel. Results from running
    "./usemem -n 48 --prealloc --prefault -O -U 3448054972":

    48 sample Avg
    next-20220822     next-20220822                  next-20220822
    unmodified        revert i_mmap_sema locking     vma sema locking, this series
    ---------------------------------------------------------------------------
    494229 KB/s       495375 KB/s                    495573 KB/s

    The recent regression report [1] notes page fault and fork latency of
    shared hugetlb mappings. To measure this, I created two simple programs:
    1) map a shared hugetlb area, write fault all pages, unmap area
       Do this in a continuous loop to measure faults per second
    2) map a shared hugetlb area, write fault a few pages, fork and exit
       Do this in a continuous loop to measure forks per second
    These programs were run on a 48 CPU VM with 320GB memory. The shared
    mapping size was 250GB. For a baseline, a single instance of each
    program was run. Then, multiple instances were run in parallel to
    introduce lock contention. Changing the locking scheme results in a
    significant performance benefit.

    test              instances   unmodified   revert   vma
    --------------------------------------------------------------------------
    faults per sec        1         397068     403411   394935
    faults per sec       24          68322      83023    82436
    forks per sec         1           2717       2862     2816
    forks per sec        24            404        465      499
    Combined faults      24           1528      69090    59544
    Combined forks       24            337         66      140

    The combined test runs the faulting program and the forking program
    simultaneously.
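
    For reference, a minimal sketch of the first test program might look
    like the following. It is an illustration rather than the exact
    program used above: the mapping size, huge page size, and run time are
    placeholder assumptions (the tests above used a 250GB mapping).

        /*
         * Sketch of test 1: map a shared hugetlb area, write fault all
         * pages, unmap, and repeat.  Requires hugetlb pages to be
         * reserved up front (e.g. via /proc/sys/vm/nr_hugepages).
         */
        #define _GNU_SOURCE
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/mman.h>
        #include <time.h>

        #define MAP_SIZE    (1UL << 30)  /* 1GB for illustration */
        #define HPAGE_SIZE  (2UL << 20)  /* assumed 2MB huge pages */
        #define RUN_SECONDS 10

        int main(void)
        {
                unsigned long faults = 0;
                time_t start = time(NULL);

                while (time(NULL) - start < RUN_SECONDS) {
                        char *area = mmap(NULL, MAP_SIZE,
                                          PROT_READ | PROT_WRITE,
                                          MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB,
                                          -1, 0);
                        if (area == MAP_FAILED) {
                                perror("mmap");
                                exit(1);
                        }
                        /* write fault every huge page in the area */
                        for (unsigned long off = 0; off < MAP_SIZE; off += HPAGE_SIZE)
                                area[off] = 1;
                        faults += MAP_SIZE / HPAGE_SIZE;
                        munmap(area, MAP_SIZE);
                }
                printf("%lu write faults in %d seconds\n", faults, RUN_SECONDS);
                return 0;
        }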

    Patches 1 and 2 of this series revert c0d0381ade79 and 87bf91d39bb5,
    which depends on c0d0381ade79. Acquisition of i_mmap_rwsem is still
    required in the fault path to establish pmd sharing, so this is moved
    back to huge_pmd_share. With c0d0381ade79 reverted, this race is
    exposed:

    Faulting thread                       Unsharing thread
    ...                                   ...
    ptep = huge_pte_offset()
          or
    ptep = huge_pte_alloc()
    ...
                                          i_mmap_lock_write
                                          lock page table
    ptep invalid   <------------------    huge_pmd_unshare()
    Could be in a previously              unlock_page_table
    sharing process or worse              i_mmap_unlock_write
    ...
    ptl = huge_pte_lock(ptep)
    get/update pte
    set_pte_at(pte, ptep)

    Reverting 87bf91d39bb5 exposes races between page faults and file
    truncation. Patches 3 and 4 of this series address those races. This
    requires using the hugetlb fault mutexes for more coordination between
    the fault code and file page removal.
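
    In rough outline, the coordination means both paths serialize on the
    same per-page fault mutex. The sketch below is illustrative only: it
    uses the existing hugetlb fault mutex table, but the exact usage is in
    patches 3 and 4.

        u32 hash = hugetlb_fault_mutex_hash(mapping, index);

        mutex_lock(&hugetlb_fault_mutex_table[hash]);
        /* ... handle the fault, or remove the page from the page cache ... */
        mutex_unlock(&hugetlb_fault_mutex_table[hash]);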

    Patches 5 - 7 add infrastructure for a new vma based rw semaphore that
    will be used for pmd sharing synchronization. The idea is that this
    semaphore will be held in read mode for the duration of fault processing,
    and held in write mode for unmap operations which may call huge_pmd_unshare.
    Acquiring i_mmap_rwsem is also still required to synchronize huge pmd
    sharing. However, it is only required in the fault path when setting
    up sharing, and will be acquired in huge_pmd_share().
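
    In outline, the intended pattern looks like the sketch below. The
    structure layout and naming are assumptions drawn from the description
    above; the actual interfaces are introduced in patches 5 - 7.

        /* illustrative only: layout assumed from the description above */
        struct hugetlb_vma_lock {
                struct rw_semaphore rw_sema;    /* new per-vma semaphore */
        };

        /* fault path: read mode for the duration of fault processing */
        down_read(&vma_lock->rw_sema);
        /*
         * ... handle the fault; huge_pmd_share() may still take
         * i_mmap_rwsem here when setting up pmd sharing ...
         */
        up_read(&vma_lock->rw_sema);

        /* unmap path: write mode excludes faults while pmds are unshared */
        down_write(&vma_lock->rw_sema);
        /* ... unmap the range, possibly calling huge_pmd_unshare() ... */
        up_write(&vma_lock->rw_sema);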

    Patch 8 makes use of this new vma lock. Unfortunately, the fault code
    and the truncate/hole punch code would naturally take locks in the
    opposite order, which could lead to deadlock. Since the performance of
    page faults is more important, the truncation/hole punch code is
    modified to back out and take the locks in the correct order if
    necessary.
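
    The back-out follows the familiar trylock pattern. Again a sketch
    under assumptions (reusing the illustrative vma_lock above, and
    assuming the fault path takes the vma lock before the fault mutex;
    the real retry and revalidation details are in patch 8):

        /* truncation side: fault mutex first, then try the vma lock */
        mutex_lock(fault_mutex);
        if (!down_write_trylock(&vma_lock->rw_sema)) {
                /*
                 * A fault may hold the vma lock while waiting on the
                 * fault mutex.  Back out and take the locks in the
                 * same order as the fault path to avoid deadlock.
                 */
                mutex_unlock(fault_mutex);
                down_write(&vma_lock->rw_sema);
                mutex_lock(fault_mutex);
                /* ... revalidate state after dropping the mutex ... */
        }
        /* ... remove the page(s) ... */
        up_write(&vma_lock->rw_sema);
        mutex_unlock(fault_mutex);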

    [1] https://lore.kernel.org/linux-mm/43faf292-245b-5db5-cce9-369d8fb6bd21@infradead.org/
    [2] https://lore.kernel.org/lkml/20200622005551.GK5535@shao2-debian/
    [3] https://lore.kernel.org/linux-mm/20200706202615.32111-1-mike.kravetz@oracle.com/

    RFC -> v1
    - Addressed many issues pointed out by Miaohe Lin. Thank you! Most
      significant was not attempting to back out pages in the fault code
      due to races with truncation (patch 4).
    - Rebased and retested on next-20220822

    Mike Kravetz (8):
      hugetlbfs: revert use i_mmap_rwsem to address page fault/truncate race
      hugetlbfs: revert use i_mmap_rwsem for more pmd sharing
        synchronization
      hugetlb: rename remove_huge_page to hugetlb_delete_from_page_cache
      hugetlb: handle truncate racing with page faults
      hugetlb: rename vma_shareable() and refactor code
      hugetlb: add vma based lock for pmd sharing
      hugetlb: create hugetlb_unmap_file_folio to unmap single file folio
      hugetlb: use new vma_lock for pmd sharing synchronization

    fs/hugetlbfs/inode.c    | 364 ++++++++++++++++++++++++++++++----------
    include/linux/hugetlb.h |  38 ++++-
    kernel/fork.c           |   6 +-
    mm/hugetlb.c            | 354 ++++++++++++++++++++++++++++----------
    mm/memory.c             |   2 +
    mm/rmap.c               | 114 ++++++++-----
    mm/userfaultfd.c        |  14 +-
    7 files changed, 653 insertions(+), 239 deletions(-)


    base-commit: cc2986f4dc67df7e6209e0cd74145fffbd30d693
    --
    2.37.1
