Subject: [PATCH] mm: vmscan: avoid redundant unmapping of dirty folios
If a dirty folio is not going to be reclaimed during the shrink process,
there is no need to unmap it first; skipping the unmap saves shrink time
when traversing dirty folios.
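
As an illustration only (not the actual shrink_folio_list() code), the
reordering can be modelled by a small standalone C program in which the
dirty-folio "keep" decision is taken before the unmap step, so a folio
that will be kept is never unmapped first. shrink_one() and
will_be_kept_dirty() are hypothetical stand-ins for the real logic:

/*
 * Standalone model of the control-flow change, NOT kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

struct folio_state {
	bool mapped;
	bool dirty;
	bool may_writepage;	/* stand-in for sc->may_writepage and friends */
};

/* Decide, before unmapping, whether a dirty folio will be kept anyway. */
static bool will_be_kept_dirty(const struct folio_state *f)
{
	return f->dirty && !f->may_writepage;
}

static void shrink_one(const struct folio_state *f)
{
	if (!f->mapped)
		return;

	/* After the patch: the dirty-folio decision runs first ... */
	if (will_be_kept_dirty(f)) {
		printf("keep folio, skip the unmap entirely\n");
		return;
	}

	/* ... so only folios that can still be reclaimed reach the unmap. */
	printf("unmap folio, then continue with reclaim\n");
}

int main(void)
{
	struct folio_state kept = { true, true, false };
	struct folio_state reclaimable = { true, false, true };

	shrink_one(&kept);		/* keep folio, skip the unmap entirely */
	shrink_one(&reclaimable);	/* unmap folio, then continue with reclaim */
	return 0;
}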

Signed-off-by: Zhiguo Jiang <justinjiang@vivo.com>
---
mm/vmscan.c | 72 +++++++++++++++++++++++++++--------------------------
1 file changed, 37 insertions(+), 35 deletions(-)
mode change 100644 => 100755 mm/vmscan.c

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2cc0cb41fb32..cf555cdfcefc
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1261,6 +1261,43 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			enum ttu_flags flags = TTU_BATCH_FLUSH;
 			bool was_swapbacked = folio_test_swapbacked(folio);
 
+			if (folio_test_dirty(folio)) {
+				/*
+				 * Only kswapd can writeback filesystem folios
+				 * to avoid risk of stack overflow. But avoid
+				 * injecting inefficient single-folio I/O into
+				 * flusher writeback as much as possible: only
+				 * write folios when we've encountered many
+				 * dirty folios, and when we've already scanned
+				 * the rest of the LRU for clean folios and see
+				 * the same dirty folios again (with the reclaim
+				 * flag set).
+				 */
+				if (folio_is_file_lru(folio) &&
+				    (!current_is_kswapd() ||
+				     !folio_test_reclaim(folio) ||
+				     !test_bit(PGDAT_DIRTY, &pgdat->flags))) {
+					/*
+					 * Immediately reclaim when written back.
+					 * Similar in principle to folio_deactivate()
+					 * except we already have the folio isolated
+					 * and know it's dirty
+					 */
+					node_stat_mod_folio(folio, NR_VMSCAN_IMMEDIATE,
+							nr_pages);
+					folio_set_reclaim(folio);
+
+					goto activate_locked;
+				}
+
+				if (references == FOLIOREF_RECLAIM_CLEAN)
+					goto keep_locked;
+				if (!may_enter_fs(folio, sc->gfp_mask))
+					goto keep_locked;
+				if (!sc->may_writepage)
+					goto keep_locked;
+			}
+
 			if (folio_test_pmd_mappable(folio))
 				flags |= TTU_SPLIT_HUGE_PMD;
 
@@ -1286,41 +1323,6 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 
 		mapping = folio_mapping(folio);
 		if (folio_test_dirty(folio)) {
-			/*
-			 * Only kswapd can writeback filesystem folios
-			 * to avoid risk of stack overflow. But avoid
-			 * injecting inefficient single-folio I/O into
-			 * flusher writeback as much as possible: only
-			 * write folios when we've encountered many
-			 * dirty folios, and when we've already scanned
-			 * the rest of the LRU for clean folios and see
-			 * the same dirty folios again (with the reclaim
-			 * flag set).
-			 */
-			if (folio_is_file_lru(folio) &&
-			    (!current_is_kswapd() ||
-			     !folio_test_reclaim(folio) ||
-			     !test_bit(PGDAT_DIRTY, &pgdat->flags))) {
-				/*
-				 * Immediately reclaim when written back.
-				 * Similar in principle to folio_deactivate()
-				 * except we already have the folio isolated
-				 * and know it's dirty
-				 */
-				node_stat_mod_folio(folio, NR_VMSCAN_IMMEDIATE,
-						nr_pages);
-				folio_set_reclaim(folio);
-
-				goto activate_locked;
-			}
-
-			if (references == FOLIOREF_RECLAIM_CLEAN)
-				goto keep_locked;
-			if (!may_enter_fs(folio, sc->gfp_mask))
-				goto keep_locked;
-			if (!sc->may_writepage)
-				goto keep_locked;
-
 			/*
 			 * Folio is dirty. Flush the TLB if a writable entry
 			 * potentially exists to avoid CPU writes after I/O
--
2.39.0