Date: Fri, 5 Apr 2024 09:23:01 +0200
Subject: Re: [PATCH v3 07/14] mm/ksm: use folio in write_protect_page
From: David Hildenbrand <>
On 25.03.24 13:48, alexs@kernel.org wrote:
> From: "Alex Shi (tencent)" <alexs@kernel.org>
>
> A compound page is checked and skipped before write_protect_page() is
> called, so use a folio to save a few compound_head() checks.
>
> Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
> Cc: Izik Eidus <izik.eidus@ravellosystems.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Chris Wright <chrisw@sous-sol.org>
> ---
>  mm/ksm.c | 22 +++++++++++-----------
>  1 file changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 95a487a21eed..5d1f62e7462a 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1289,22 +1289,22 @@ static u32 calc_checksum(struct page *page)
>  	return checksum;
>  }
>
> -static int write_protect_page(struct vm_area_struct *vma, struct page *page,
> +static int write_protect_page(struct vm_area_struct *vma, struct folio *folio,
>  			      pte_t *orig_pte)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> -	DEFINE_PAGE_VMA_WALK(pvmw, page, vma, 0, 0);
> +	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, 0, 0);
>  	int swapped;
>  	int err = -EFAULT;
>  	struct mmu_notifier_range range;
>  	bool anon_exclusive;
>  	pte_t entry;
>
> -	pvmw.address = page_address_in_vma(page, vma);
> +	pvmw.address = page_address_in_vma(&folio->page, vma);
>  	if (pvmw.address == -EFAULT)
>  		goto out;
>
> -	BUG_ON(PageTransCompound(page));
> +	VM_BUG_ON(folio_test_large(folio));
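As an aside, for anyone wondering where the savings come from: the struct
page accessors resolve the head page on every test, while the folio
accessors already operate on the head. A rough sketch of the difference
(illustrative only, not taken from the patch; do_something() is a
hypothetical placeholder):

	/* page API: each flag test first resolves the head page, e.g.
	 * PageAnon(page) is folio_test_anon(page_folio(page)), and
	 * page_folio() performs a compound_head() lookup. */
	if (PageAnon(page))
		do_something();

	/* folio API: the folio already refers to the head page, so the
	 * compound_head() lookup disappears. */
	if (folio_test_anon(folio))
		do_something();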
I suggest

	if (WARN_ON_ONCE(folio_test_large(folio)))
		return err;

before the page_address_in_vma() call.
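Applied to the hunk above, the opening of the function would then look
roughly like this (a sketch against the quoted patch, not necessarily the
final commit; other locals omitted for brevity):

	static int write_protect_page(struct vm_area_struct *vma,
				      struct folio *folio, pte_t *orig_pte)
	{
		struct mm_struct *mm = vma->vm_mm;
		DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, 0, 0);
		int err = -EFAULT;

		/* Warn once and fail the merge attempt instead of crashing:
		 * callers are expected to have filtered out large folios. */
		if (WARN_ON_ONCE(folio_test_large(folio)))
			return err;

		pvmw.address = page_address_in_vma(&folio->page, vma);
		if (pvmw.address == -EFAULT)
			goto out;
		/* ... rest of the function unchanged ... */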
Reviewed-by: David Hildenbrand <david@redhat.com>
--
Cheers,

David / dhildenb