Subject: Re: [v7 PATCH 08/12] mm: vmscan: add per memcg shrinker nr_deferred
On Tue, Feb 9, 2021 at 5:40 PM Roman Gushchin <guro@fb.com> wrote:
>
> On Tue, Feb 09, 2021 at 05:25:16PM -0800, Yang Shi wrote:
> > On Tue, Feb 9, 2021 at 5:10 PM Roman Gushchin <guro@fb.com> wrote:
> > >
> > > On Tue, Feb 09, 2021 at 09:46:42AM -0800, Yang Shi wrote:
> > > > Currently the number of deferred objects is per shrinker, but some slabs, for example the
> > > > vfs inode/dentry caches, are per memcg. This results in poor isolation among memcgs.
> > > >
> > > > The deferred objects are typically generated by __GFP_NOFS allocations. One memcg with
> > > > excessive __GFP_NOFS allocations may blow up the deferred objects, and other innocent
> > > > memcgs may then suffer from over-shrinking, excessive reclaim latency, etc.
> > > >
> > > > For example, two workloads run in memcgA and memcgB respectively, and the workload in B
> > > > is vfs heavy. The workload in A generates excessive deferred objects, so B's vfs cache
> > > > might be hit heavily (half of the caches dropped) by B's limit reclaim or global reclaim.
> > > >
> > > > We observed this in our production environment, which was running a vfs heavy workload,
> > > > as shown in the tracing log below:
> > > >
> > > > <...>-409454 [016] .... 28286961.747146: mm_shrink_slab_start: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> > > > nid: 1 objects to shrink 3641681686040 gfp_flags GFP_HIGHUSER_MOVABLE|__GFP_ZERO pgs_scanned 1 lru_pgs 15721
> > > > cache items 246404277 delta 31345 total_scan 123202138
> > > > <...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> > > > nid: 1 unused scan count 3641681686040 new scan count 3641798379189 total_scan 602
> > > > last shrinker return val 123186855
> > > >
> > > > The vfs cache to page cache ratio was 10:1 on this machine, and half of the caches were
> > > > dropped. This also resulted in a significant amount of page cache being dropped due to
> > > > inode eviction.
> > > >
> > > > Making nr_deferred per memcg for memcg aware shrinkers solves the unfairness and brings
> > > > better isolation.
> > > >
> > > > When memcg is not enabled (!CONFIG_MEMCG or memcg disabled), the shrinker's nr_deferred
> > > > is used. Non memcg aware shrinkers use the shrinker's nr_deferred all the time.
> > > >
> > > > Signed-off-by: Yang Shi <shy828301@gmail.com>
> > > > ---
> > > > include/linux/memcontrol.h | 7 +++---
> > > > mm/vmscan.c | 49 +++++++++++++++++++++++++-------------
> > > > 2 files changed, 37 insertions(+), 19 deletions(-)
> > > >
> > > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > > > index 4c9253896e25..c457fc7bc631 100644
> > > > --- a/include/linux/memcontrol.h
> > > > +++ b/include/linux/memcontrol.h
> > > > @@ -93,12 +93,13 @@ struct lruvec_stat {
> > > >  };
> > > >
> > > >  /*
> > > > - * Bitmap of shrinker::id corresponding to memcg-aware shrinkers,
> > > > - * which have elements charged to this memcg.
> > > > + * Bitmap and deferred work of shrinker::id corresponding to memcg-aware
> > > > + * shrinkers, which have elements charged to this memcg.
> > > >   */
> > > >  struct shrinker_info {
> > > >  	struct rcu_head rcu;
> > > > -	unsigned long map[];
> > > > +	atomic_long_t *nr_deferred;
> > > > +	unsigned long *map;
> > > >  };
> > > >
> > > >  /*
> > > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > > index a047980536cf..d4b030a0b2a9 100644
> > > > --- a/mm/vmscan.c
> > > > +++ b/mm/vmscan.c
> > > > @@ -187,9 +187,13 @@ static DECLARE_RWSEM(shrinker_rwsem);
> > > >  #ifdef CONFIG_MEMCG
> > > >  static int shrinker_nr_max;
> > > >
> > > > +/* The shrinker_info is expanded in a batch of BITS_PER_LONG */
> > > >  #define NR_MAX_TO_SHR_MAP_SIZE(nr_max) \
> > > >  	(DIV_ROUND_UP(nr_max, BITS_PER_LONG) * sizeof(unsigned long))
> > > >
> > > > +#define NR_MAX_TO_SHR_DEF_SIZE(nr_max) \
> > > > +	(round_up(nr_max, BITS_PER_LONG) * sizeof(atomic_long_t))
> > > > +
> > > >  static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
> > > >  						     int nid)
> > > >  {
> > > > @@ -203,10 +207,12 @@ static void free_shrinker_info_rcu(struct rcu_head *head)
> > > >  }
> > > >
> > > >  static int expand_one_shrinker_info(struct mem_cgroup *memcg,
> > > > -				    int size, int old_size)
> > > > +				    int m_size, int d_size,
> > > > +				    int old_m_size, int old_d_size)
> > > >  {
> > > >  	struct shrinker_info *new, *old;
> > > >  	int nid;
> > > > +	int size = m_size + d_size;
> > > >
> > > >  	for_each_node(nid) {
> > > >  		old = shrinker_info_protected(memcg, nid);
> > > > @@ -218,9 +224,15 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
> > > >  		if (!new)
> > > >  			return -ENOMEM;
> > > >
> > > > -		/* Set all old bits, clear all new bits */
> > > > -		memset(new->map, (int)0xff, old_size);
> > > > -		memset((void *)new->map + old_size, 0, size - old_size);
> > > > +		new->nr_deferred = (atomic_long_t *)(new + 1);
> > > > +		new->map = (void *)new->nr_deferred + d_size;
> > > > +
> > > > +		/* map: set all old bits, clear all new bits */
> > > > +		memset(new->map, (int)0xff, old_m_size);
> > > > +		memset((void *)new->map + old_m_size, 0, m_size - old_m_size);
> > > > +		/* nr_deferred: copy old values, clear all new values */
> > > > +		memcpy(new->nr_deferred, old->nr_deferred, old_d_size);
> > > > +		memset((void *)new->nr_deferred + old_d_size, 0, d_size - old_d_size);
> > > >
> > > >  		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, new);
> > > >  		call_rcu(&old->rcu, free_shrinker_info_rcu);
> > > > @@ -235,9 +247,6 @@ void free_shrinker_info(struct mem_cgroup *memcg)
> > > >  	struct shrinker_info *info;
> > > >  	int nid;
> > > >
> > > > -	if (mem_cgroup_is_root(memcg))
> > > > -		return;
> > > > -
> > > >  	for_each_node(nid) {
> > > >  		pn = mem_cgroup_nodeinfo(memcg, nid);
> > > >  		info = shrinker_info_protected(memcg, nid);
> > > > @@ -250,12 +259,13 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
> > > >  {
> > > >  	struct shrinker_info *info;
> > > >  	int nid, size, ret = 0;
> > > > -
> > > > -	if (mem_cgroup_is_root(memcg))
> > > > -		return 0;
> > > > +	int m_size, d_size = 0;
> > > >
> > > >  	down_write(&shrinker_rwsem);
> > > > -	size = NR_MAX_TO_SHR_MAP_SIZE(shrinker_nr_max);
> > > > +	m_size = NR_MAX_TO_SHR_MAP_SIZE(shrinker_nr_max);
> > > > +	d_size = NR_MAX_TO_SHR_DEF_SIZE(shrinker_nr_max);
> > > > +	size = m_size + d_size;
> > > > +
> > > >  	for_each_node(nid) {
> > > >  		info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid);
> > > >  		if (!info) {
> > > > @@ -263,6 +273,8 @@ int alloc_shrinker_info(struct mem_cgroup *memcg)
> > > >  			ret = -ENOMEM;
> > > >  			break;
> > > >  		}
> > > > +		info->nr_deferred = (atomic_long_t *)(info + 1);
> > > > +		info->map = (void *)info->nr_deferred + d_size;
> > > >  		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
> > > >  	}
> > > >  	up_write(&shrinker_rwsem);
> > > > @@ -274,10 +286,16 @@ static int expand_shrinker_info(int new_id)
> > > >  {
> > > >  	int size, old_size, ret = 0;
> > > >  	int new_nr_max = new_id + 1;
> > > > +	int m_size, d_size = 0;
> > > > +	int old_m_size, old_d_size = 0;
> > > >  	struct mem_cgroup *memcg;
> > > >
> > > > -	size = NR_MAX_TO_SHR_MAP_SIZE(new_nr_max);
> > > > -	old_size = NR_MAX_TO_SHR_MAP_SIZE(shrinker_nr_max);
> > > > +	m_size = NR_MAX_TO_SHR_MAP_SIZE(new_nr_max);
> > > > +	d_size = NR_MAX_TO_SHR_DEF_SIZE(new_nr_max);
> > > > +	size = m_size + d_size;
> > > > +	old_m_size = NR_MAX_TO_SHR_MAP_SIZE(shrinker_nr_max);
> > > > +	old_d_size = NR_MAX_TO_SHR_DEF_SIZE(shrinker_nr_max);
> > > > +	old_size = old_m_size + old_d_size;
> > > >  	if (size <= old_size)
> > > >  		goto out;
> > >
> > > It looks correct, but a bit bulky. Can we check that the new maximum
> > > number of elements is larger than the old one here?
> >
> > It doesn't seem so to me. For example, say shrinker_nr_max is 1, then a
> > new shrinker is registered and new_nr_max is 2, but the new size is
> > actually equal to the old size.
>
> I see.
>
> >
> > We should be able to do:
> > if (round_up(new_nr_max, BITS_PER_LONG) <= round_up(shrinker_nr_max,
> > BITS_PER_LONG))
> >
> > Does it seem better?
>
> Yes, I think so.
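
Something like the below is what I have in mind (just an untested
sketch; need_expand() is an illustrative helper name, not in the
patch):

/* The allocation only needs to grow when the rounded-up id count grows */
static inline bool need_expand(int nr_max)
{
	return round_up(nr_max, BITS_PER_LONG) >
	       round_up(shrinker_nr_max, BITS_PER_LONG);
}

Then expand_shrinker_info() could bail out early:

	if (!need_expand(new_nr_max))
		goto out;
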
>
> >
> > >
> > > >
> > > > @@ -286,9 +304,8 @@ static int expand_shrinker_info(int new_id)
> > > >
> > > >  	memcg = mem_cgroup_iter(NULL, NULL, NULL);
> > > >  	do {
> > > > -		if (mem_cgroup_is_root(memcg))
> > > > -			continue;
> > > > -		ret = expand_one_shrinker_info(memcg, size, old_size);
> > > > +		ret = expand_one_shrinker_info(memcg, m_size, d_size,
> > > > +					       old_m_size, old_d_size);
> > >
> > > Pass the old and the new numbers to expand_one_shrinker_info() and
> > > have all size manipulation there?
> >
> > With the above proposal we could move the size manipulation right
> > before the memcg iter, so we could save some cycles if we don't have
> > to expand it.
>
> I mostly dislike passing 4 arguments to expand_one_shrinker_info():
> old_m_size, old_d_size, etc. But you're right, there is no good reason
> to calculate them for each cgroup if we can do it once. Can you please
> rename the arguments to map_size and defer_size (or something more
> obvious than m and d, to your taste)?

Yes, sure. map_size/defer_size and old_map_size/old_defer_size seem
good to me as well.
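
So expand_shrinker_info() would compute map_size/defer_size and
old_map_size/old_defer_size once, right before the memcg iteration,
and pass them down. A rough sketch of the renamed helper (untested;
the allocation between the two hunks above is elided):

static int expand_one_shrinker_info(struct mem_cgroup *memcg,
				    int map_size, int defer_size,
				    int old_map_size, int old_defer_size)
{
	struct shrinker_info *new, *old;
	int size = map_size + defer_size;
	int nid;

	for_each_node(nid) {
		old = shrinker_info_protected(memcg, nid);
		...
		if (!new)
			return -ENOMEM;

		/* nr_deferred array sits right after the struct, the map follows */
		new->nr_deferred = (atomic_long_t *)(new + 1);
		new->map = (void *)new->nr_deferred + defer_size;

		/* map: set all old bits, clear all new bits */
		memset(new->map, (int)0xff, old_map_size);
		memset((void *)new->map + old_map_size, 0, map_size - old_map_size);
		/* nr_deferred: copy old values, clear all new values */
		memcpy(new->nr_deferred, old->nr_deferred, old_defer_size);
		memset((void *)new->nr_deferred + old_defer_size, 0,
		       defer_size - old_defer_size);

		rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, new);
		call_rcu(&old->rcu, free_shrinker_info_rcu);
	}

	return 0;
}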

>
> Thanks!
