Date: Sun, 30 Aug 2020 14:08:16 -0700 (PDT)
From: Hugh Dickins <>
Subject: [PATCH 4/5] mm: fix check_move_unevictable_pages() on THP
check_move_unevictable_pages() is used in making unevictable shmem pages evictable: by shmem_unlock_mapping(), drm_gem_check_release_pagevec() and i915/gem check_release_pagevec(). Those may pass down subpages of a huge page, when /sys/kernel/mm/transparent_hugepage/shmem_enabled is "force".
That does not crash or warn at present, but the accounting of the vm events unevictable_pgs_scanned and unevictable_pgs_rescued is inconsistent: scanned is incremented on each subpage, rescued only on the head (since tails already appear evictable once the head has been updated).
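Concretely, on x86_64 with 4K base pages a PMD-sized THP spans HPAGE_PMD_NR == 512 subpages, so walking all of them used to add 512 to the scanned count but only 1 to the rescued count. A throwaway userspace model of the old arithmetic (illustration only, not kernel code; the 512 assumes that configuration):

	#include <stdio.h>

	int main(void)
	{
		int pgscanned = 0, pgrescued = 0;

		for (int i = 0; i < 512; i++) {
			pgscanned++;	/* old code: bumped for every subpage */
			if (i == 0)	/* only the head still tests unevictable */
				pgrescued++;
		}
		/* prints scanned=512 rescued=1: the skew being fixed */
		printf("scanned=%d rescued=%d\n", pgscanned, pgrescued);
		return 0;
	}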
5.8 commit 5d91f31faf8e ("mm: swap: fix vmstats for huge page") has established that vm_events in general (and unevictable_pgs_rescued in particular) should count every subpage: so follow that precedent here.
Do this in such a way that there is no problem if mem_cgroup_page_lruvec() is later made stricter (to check that page->mem_cgroup is always set): skip the tails before calling it, and add thp_nr_pages() to the vm event counts on the head.
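For illustration, the loop body ends up shaped as below (a sketch pieced together from the diff and the surrounding 5.9-rc2 vmscan.c; the per-pgdat lru_lock juggling and the VM_BUG_ON are elided):

	for (i = 0; i < pvec->nr; i++) {
		struct page *page = pvec->pages[i];
		int nr_pages;

		if (PageTransTail(page))	/* skip tails before the lruvec lookup */
			continue;

		nr_pages = thp_nr_pages(page);	/* 1 for order-0, HPAGE_PMD_NR for a THP head */
		pgscanned += nr_pages;

		/* ... pgdat->lru_lock handling elided ... */
		lruvec = mem_cgroup_page_lruvec(page, pgdat);

		if (!PageLRU(page) || !PageUnevictable(page))
			continue;

		if (page_evictable(page)) {
			enum lru_list lru = page_lru_base_type(page);

			ClearPageUnevictable(page);
			del_page_from_lru_list(page, lruvec, LRU_UNEVICTABLE);
			add_page_to_lru_list(page, lruvec, lru);
			pgrescued += nr_pages;	/* head now accounts for all subpages */
		}
	}

Both tallies are still flushed once at the end, via __count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned) and __count_vm_events(UNEVICTABLE_PGRESCUED, pgrescued), so the two events now move in step for a THP.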
Signed-off-by: Hugh Dickins <hughd@google.com>
---
Nothing here worth going to stable, since it's just a testing config that is fixed, whose event numbers are not very important; but this will be needed before Alex Shi's warning, and might as well go in now.
The callers of check_move_unevictable_pages() could be optimized to skip over tails; but Matthew Wilcox has other changes in flight there, so let's skip that optimization for now.
 mm/vmscan.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)
--- 5.9-rc2/mm/vmscan.c	2020-08-16 17:32:50.721507348 -0700
+++ linux/mm/vmscan.c	2020-08-28 17:47:10.595580876 -0700
@@ -4260,8 +4260,14 @@ void check_move_unevictable_pages(struct
 	for (i = 0; i < pvec->nr; i++) {
 		struct page *page = pvec->pages[i];
 		struct pglist_data *pagepgdat = page_pgdat(page);
+		int nr_pages;
+
+		if (PageTransTail(page))
+			continue;
+
+		nr_pages = thp_nr_pages(page);
+		pgscanned += nr_pages;
 
-		pgscanned++;
 		if (pagepgdat != pgdat) {
 			if (pgdat)
 				spin_unlock_irq(&pgdat->lru_lock);
@@ -4280,7 +4286,7 @@ void check_move_unevictable_pages(struct
 			ClearPageUnevictable(page);
 			del_page_from_lru_list(page, lruvec, LRU_UNEVICTABLE);
 			add_page_to_lru_list(page, lruvec, lru);
-			pgrescued++;
+			pgrescued += nr_pages;
 		}
 	}
 
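For anyone who wants to watch the effect: after this patch both counters advance by the full subpage count when a THP is rescued. Grepping /proc/vmstat suffices; a trivial userspace C equivalent (illustrative only, nothing kernel-specific):

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		char line[128];
		FILE *f = fopen("/proc/vmstat", "r");

		if (!f)
			return 1;
		while (fgets(line, sizeof(line), f))
			if (!strncmp(line, "unevictable_pgs_scanned", 23) ||
			    !strncmp(line, "unevictable_pgs_rescued", 23))
				fputs(line, stdout);
		fclose(f);
		return 0;
	}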