    Subject: [PATCH v7 17/17] mm: Clear shrinker bit if there are no objects related to memcg
    From: Kirill Tkhai <ktkhai@virtuozzo.com>
    Date: 2018-05-22
    To avoid further unneeded calls of do_shrink_slab() for shrinkers
    that no longer have any charged objects in a memcg, their bits
    have to be cleared.

    This patch introduces a lockless mechanism to do that without
    racing with a parallel list_lru add. After do_shrink_slab()
    returns SHRINK_EMPTY the first time, we clear the bit and call it
    once again. Then we restore the bit if the new return value is
    different.
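
    In condensed form, the logic this patch adds to shrink_slab_memcg()
    looks roughly like this (a simplified sketch of the vmscan.c hunk
    below, with the surrounding for_each_set_bit() loop omitted):

        ret = do_shrink_slab(&sc, shrinker, priority);
        if (ret == SHRINK_EMPTY) {
                /* Tentatively declare the shrinker empty for this memcg. */
                clear_bit(i, map->map);
                /* Pairs with smp_mb__before_atomic() in memcg_set_shrinker_bit(). */
                smp_mb__after_atomic();
                /* Recheck: an object may have been queued in the meantime. */
                ret = do_shrink_slab(&sc, shrinker, priority);
                if (ret == SHRINK_EMPTY)
                        ret = 0;
                else
                        memcg_set_shrinker_bit(memcg, nid, i); /* restore the bit */
        }
        freed += ret;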

    Note that the single smp_mb__after_atomic() in shrink_slab_memcg()
    covers two situations:

    1)list_lru_add()          shrink_slab_memcg
        list_add_tail()         for_each_set_bit() <--- read bit
                                  do_shrink_slab() <--- missed list update (no barrier)
        <MB>                      <MB>
        set_bit()                 do_shrink_slab() <--- seen list update

    This situation, where the first do_shrink_slab() sees the bit set
    but does not see the list update (i.e., it races with the queueing
    of the first element), is rare. So we do not add an <MB> before
    the first do_shrink_slab() call, to avoid slowing down the common
    case; instead, the second call is needed, as shown in (2) below.

    2)list_lru_add()          shrink_slab_memcg()
        list_add_tail()         ...
        set_bit()               ...
        ...                     for_each_set_bit()
        do_shrink_slab()          do_shrink_slab()
        clear_bit()               ...
        ...                       ...
        list_lru_add()            ...
        list_add_tail()           clear_bit()
        <MB>                      <MB>
        set_bit()                 do_shrink_slab()

    The barriers guarantee that the second do_shrink_slab() in the
    right-hand task sees the list update if it really cleared the bit.
    This case is drawn in the code comment.
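
    As an illustration of how the two barriers pair, the ordering-relevant
    lines of the two sides are (a schematic sketch only; "item" and "l"
    stand in for the list_lru internals, which are not part of this patch):

        /* list_lru_add() path: publish the object, then the bit. */
        list_add_tail(item, &l->list);                 /* A: object becomes visible */
        smp_mb__before_atomic();                       /* in memcg_set_shrinker_bit() */
        set_bit(shrinker_id, map->map);                /* B: bit becomes visible */

        /* shrink_slab_memcg(): retract the bit, then rescan the list. */
        clear_bit(i, map->map);                        /* C */
        smp_mb__after_atomic();                        /* pairs with the barrier above */
        ret = do_shrink_slab(&sc, shrinker, priority); /* D: rescans the list */

    With full barriers on both sides, it cannot happen that D misses the
    object added at A while C also remains the final state of the bit:
    either the second do_shrink_slab() finds the object, or set_bit()
    lands after clear_bit() and the bit stays set for the next pass.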

    [Results/performance of the patchset]

    After the whole patchset is applied, the test below shows a
    significant increase in performance:

    $echo 1 > /sys/fs/cgroup/memory/memory.use_hierarchy
    $mkdir /sys/fs/cgroup/memory/ct
    $echo 4000M > /sys/fs/cgroup/memory/ct/memory.kmem.limit_in_bytes
    $for i in `seq 0 4000`; do mkdir /sys/fs/cgroup/memory/ct/$i;
    echo $$ > /sys/fs/cgroup/memory/ct/$i/cgroup.procs;
    mkdir -p s/$i; mount -t tmpfs $i s/$i;
    touch s/$i/file; done

    Then, 5 sequential calls of drop caches:
    $time echo 3 > /proc/sys/vm/drop_caches

    1)Before:
    0.00user 13.78system 0:13.78elapsed 99%CPU
    0.00user 5.59system 0:05.60elapsed 99%CPU
    0.00user 5.48system 0:05.48elapsed 99%CPU
    0.00user 8.35system 0:08.35elapsed 99%CPU
    0.00user 8.34system 0:08.35elapsed 99%CPU

    2)After:
    0.00user 1.10system 0:01.10elapsed 99%CPU
    0.00user 0.00system 0:00.01elapsed 64%CPU
    0.00user 0.01system 0:00.01elapsed 82%CPU
    0.00user 0.00system 0:00.01elapsed 64%CPU
    0.00user 0.01system 0:00.01elapsed 82%CPU

    The results show a performance improvement of at least 548 times
    (the fastest "before" run, 5.48s, against a typical "after" run
    of 0.01s).

    Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
    ---
     include/linux/memcontrol.h |  2 ++
     mm/vmscan.c                | 25 +++++++++++++++++++++++--
     2 files changed, 25 insertions(+), 2 deletions(-)

    diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
    index b3ae1373c99a..c487e4300b48 100644
    --- a/include/linux/memcontrol.h
    +++ b/include/linux/memcontrol.h
    @@ -1293,6 +1293,8 @@ static inline void memcg_set_shrinker_bit(struct mem_cgroup *memcg,
     
     		rcu_read_lock();
     		map = rcu_dereference(memcg->nodeinfo[nid]->shrinker_map);
    +		/* Pairs with smp mb in shrink_slab() */
    +		smp_mb__before_atomic();
     		set_bit(shrinker_id, map->map);
     		rcu_read_unlock();
     	}
    diff --git a/mm/vmscan.c b/mm/vmscan.c
    index 1425907a32dd..dba5f72956c6 100644
    --- a/mm/vmscan.c
    +++ b/mm/vmscan.c
    @@ -597,8 +597,29 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
     			continue;
     
     		ret = do_shrink_slab(&sc, shrinker, priority);
    -		if (ret == SHRINK_EMPTY)
    -			ret = 0;
    +		if (ret == SHRINK_EMPTY) {
    +			clear_bit(i, map->map);
    +			/*
    +			 * After the shrinker reported that it had no objects to free,
    +			 * but before we cleared the corresponding bit in the memcg
    +			 * shrinker map, a new object might have been added. To make
    +			 * sure, we have the bit set in this case, we invoke the
    +			 * shrinker one more time and re-set the bit if it reports that
    +			 * it is not empty anymore. The memory barrier here pairs with
    +			 * the barrier in memcg_set_shrinker_bit():
    +			 *
    +			 * list_lru_add()       shrink_slab_memcg()
    +			 *   list_add_tail()    clear_bit()
    +			 *   <MB>               <MB>
    +			 *   set_bit()          do_shrink_slab()
    +			 */
    +			smp_mb__after_atomic();
    +			ret = do_shrink_slab(&sc, shrinker, priority);
    +			if (ret == SHRINK_EMPTY)
    +				ret = 0;
    +			else
    +				memcg_set_shrinker_bit(memcg, nid, i);
    +		}
     		freed += ret;
     
     		if (rwsem_is_contended(&shrinker_rwsem)) {