Subject: Re: [RFC PATCH] mm/readahead: readahead aggressively if read drops in willneed range
On Sun, Jan 28, 2024 at 10:25:22PM +0800, Ming Lei wrote:
> Since commit 6d2be915e589 ("mm/readahead.c: fix readahead failure for
> memoryless NUMA nodes and limit readahead max_pages"), MADV_WILLNEED
> only tries to read ahead 512 pages, and the remaining part of the
> advised range falls back on normal readahead.

Does the MAINTAINERS file mean nothing any more?

> If bdi->ra_pages is set small, readahead will not perform efficiently
> enough. Increasing the readahead size may not be an option since the
> workload may have mixed random and sequential I/O.

I think there needs to be a lot more explanation than this about what's
going on before we jump to "And therefore this patch is the right
answer".

> @@ -972,6 +974,7 @@ struct file_ra_state {
> 	unsigned int ra_pages;
> 	unsigned int mmap_miss;
> 	loff_t prev_pos;
> +	struct maple_tree *need_mt;

No. Embed the struct maple tree. Don't allocate it. What made you
think this was the right approach?
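
I.e., something along these lines (untested sketch; assumes the init can
live in the existing file_ra_state setup path, e.g. file_ra_state_init()):

struct file_ra_state {
	...
	unsigned int ra_pages;
	unsigned int mmap_miss;
	loff_t prev_pos;
	struct maple_tree need_mt;	/* embedded, not a pointer */
};

/* initialise alongside the other fields: */
mt_init(&ra->need_mt);

That avoids the extra allocation, the failure path and the pointer
chase on every lookup.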

