Subject: Re: [PATCH 4/4] mm: zero-seek shrinkers

On Tue, 2018-10-09 at 14:47 -0400, Johannes Weiner wrote:

> These workloads also deal with tens of thousands of open files and use
> /proc for introspection, which ends up growing the proc_inode_cache to
> absurdly large sizes - again at the cost of valuable cache space, which
> isn't a reasonable trade-off, given that proc inodes can be re-created
> without involving the disk.
>
> This patch implements a "zero-seek" setting for shrinkers that results
> in a target ratio of 0:1 between their objects and IO-backed caches.
> This allows such virtual caches to grow when memory is available (they
> do cache/avoid CPU work after all), but effectively disables them as
> soon as IO-backed objects are under pressure.
>
> It then switches the shrinkers for procfs and sysfs metadata, as well
> as excess page cache shadow nodes, to the new zero-seek setting.
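
For context, my reading of the above is that the
zero-seek case boils down to a special case in the
do_shrink_slab() delta calculation, something along
these lines (a sketch of the idea, not necessarily
the exact hunk from the patch):

	if (shrinker->seeks) {
		/*
		 * Seeking shrinkers: scale the scan target
		 * by the cost of recreating an object
		 * from disk.
		 */
		delta = freeable >> priority;
		delta *= 4;
		do_div(delta, shrinker->seeks);
	} else {
		/*
		 * Zero-seek shrinkers: the objects cost no
		 * IO to rebuild, so trim them aggressively
		 * whenever there is reclaim pressure.
		 */
		delta = freeable / 2;
	}

with the individual caches presumably opting in by
setting their shrinker's ->seeks to 0 (e.g.
sb->s_shrink.seeks = 0 for the procfs superblock).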

This patch looks like a great step in the right
direction, though I do not know whether it is
aggressive enough.

Given that internal slab fragmentation will
prevent the slab cache from returning a slab to
the VM if just one object in that slab is still
in use, there may well be workloads where we
should just put a hard cap on the number of
freeable items in these slabs, and reclaim them
preemptively.
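
To put made-up numbers on it: if a dozen objects
share a slab, a single object that stays in use
is enough to keep the whole slab away from the
page allocator, no matter how many of the other
eleven the shrinker frees.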

However, I do not know for sure, and this patch
seems like a big improvement over what we had
before, so ...

> Reported-by: Domas Mituzas <dmituzas@fb.com>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>

Reviewed-by: Rik van Riel <riel@surriel.com>
