Subject: Re: [PATCH] mm: fadvise: avoid expensive remote LRU cache draining after FADV_DONTNEED
On Mon, Dec 12, 2016 at 10:21:24AM +0100, Vlastimil Babka wrote:
> On 12/10/2016 06:26 PM, Johannes Weiner wrote:
> > When FADV_DONTNEED cannot drop all pages in the range, it observes
> > that some pages might still be on per-cpu LRU caches after recent
> > instantiation and so initiates remote calls to all CPUs to flush their
> > local caches. However, in most cases, the fadvise happens from the
> > same context that instantiated the pages, and any pre-LRU pages in the
> > specified range are most likely sitting on the local CPU's LRU cache,
> > and so in many cases this results in unnecessary remote calls, which,
> > in a loaded system, can hold up the fadvise() call significantly.
>
> Got any numbers for this part?

I didn't record it in the extreme case we observed, unfortunately. We
had a slow-to-respond system and noticed it spending seconds in
lru_add_drain_all() after fadvise calls, and this patch came out of
thinking about the code and how we commonly call FADV_DONTNEED.

FWIW, I wrote a silly directory tree walker/searcher that recurses
through /usr to read and FADV_DONTNEED each file it finds. On a
2-socket, 40-thread (HT) machine, over 1% of the profile is spent in
lru_add_drain_all(). With the patch, that cost is gone; the local
drain cost shows up at 0.09%.
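
A minimal sketch of that kind of walker (not the exact tool I used;
the buffer size and nftw() parameters are arbitrary): walk /usr, read
each regular file to instantiate its page cache, then immediately
FADV_DONTNEED it from the same task:

#define _XOPEN_SOURCE 700
#include <fcntl.h>
#include <ftw.h>
#include <sys/stat.h>
#include <unistd.h>

static char buf[1 << 20];

static int visit(const char *path, const struct stat *st,
		 int type, struct FTW *ftw)
{
	int fd;

	if (type != FTW_F)
		return 0;

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return 0;

	/* Instantiate page cache pages for the file... */
	while (read(fd, buf, sizeof(buf)) > 0)
		;
	/* ...then drop them again from the same context. */
	posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

	close(fd);
	return 0;
}

int main(void)
{
	return nftw("/usr", visit, 64, FTW_PHYS);
}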

> > Try to avoid the remote call by flushing the local LRU cache before
> > even attempting to invalidate anything. It's a cheap operation, and
> > the local LRU cache is the most likely to hold any pre-LRU pages in
> > the specified fadvise range.
>
> Anyway it looks like things can't be worse after this patch, so...
>
> > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>

Thanks!
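
For anyone following along, the gist of the change, paraphrased
rather than quoted from the diff: drain the local CPU's LRU cache
before the invalidation, and only fall back to the all-CPU drain if
pages were genuinely left behind:

	/*
	 * Sketch of the FADV_DONTNEED invalidation path (paraphrased,
	 * not the literal diff). Pages this context just instantiated
	 * are most likely on the local per-cpu LRU cache, so flush
	 * that first; it's cheap.
	 */
	lru_add_drain();

	count = invalidate_mapping_pages(mapping, start_index, end_index);

	if (count < (end_index - start_index + 1)) {
		/*
		 * Some pages may still sit on remote per-cpu caches;
		 * only now pay for the expensive remote drain and a
		 * second invalidation pass.
		 */
		lru_add_drain_all();
		invalidate_mapping_pages(mapping, start_index, end_index);
	}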
