From: Qi Zheng <qi.zheng@linux.dev>
Date: Thu, 1 Jun 2023 12:46
Subject: Re: [linus:master] [mm] f95bdb700b: stress-ng.ramfs.ops_per_sec -88.8% regression

Hi Kirill,

On 2023/6/1 18:06, Kirill Tkhai wrote:
> 01.06.2023, 11:34, "Qi Zheng" <qi.zheng@linux.dev>:
>
> On 2023/6/1 08:57, Kirill Tkhai wrote:
>
>  Hi,
>
>  On 30.05.2023 06:07, Qi Zheng wrote:
>
>  Hi,
>
>  On 2023/5/29 20:51, Paul E. McKenney wrote:
>
>  On Mon, May 29, 2023 at 10:39:21AM +0800, Qi Zheng wrote:
>
>
>  [...]
>
>
>  Thanks for such a detailed explanation.
>
>  Now I think we can continue to try to complete the idea [1] from
>  Kirill Tkhai. The patch moves the heavy synchronize_srcu() to delayed
>  work, so it doesn't affect the user-visible unregistration speed.
>
>  [1]. https://lore.kernel.org/lkml/153365636747.19074.12610817307548583381.stgit@localhost.localdomain/
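
For readers following along, the shape of that idea is roughly the sketch
below. It is simplified, and the unregister_work field is illustrative
only (it does not exist in struct shrinker today); see [1] for the real
patch:

	/* Sketch: unregistration only unlinks the shrinker and queues a
	 * work item; the heavy SRCU grace period then runs from the work
	 * item, off the caller's (user-visible) path. */
	static void shrinker_unregister_workfn(struct work_struct *work)
	{
		struct shrinker *shrinker =
			container_of(work, struct shrinker, unregister_work);

		synchronize_srcu(&shrinker_srcu);	/* the slow part */

		kfree(shrinker->nr_deferred);
		shrinker->nr_deferred = NULL;
	}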
>
>
>  A blast from the past!  ;-)
>
>  But yes, moving the long-latency synchronize_srcu() off the
>  user-visible critical code path can be even better.
>
>
>  Yeah, I applied these patches ([PATCH RFC 04/10] ~ [PATCH RFC 10/10],
>  with a few conflicts), and the ops/s does get back to the previous
>  levels.
>
>  I'll continue updating this patchset after doing more testing.
>
>
>  You may also fix the issue using the generic solution below.
>
>  In addition to this we need a patch which calls
>  unregister_shrinker_delayed_initiate() instead of unregister_shrinker()
>  in deactivate_locked_super(), and calls
>  unregister_shrinker_delayed_finalize() from destroy_super_work().
>  Compilation tested only.
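
For illustration, the fs/super.c side might look roughly like this. This
is a sketch only, assuming the existing destroy_super_work() teardown
path, and it is not tested:

	void deactivate_locked_super(struct super_block *s)
	{
		/* ... existing fs-specific teardown ... */
		unregister_shrinker_delayed_initiate(&s->s_shrink);
		/* ... destroy_super_work() is scheduled as today ... */
	}

	static void destroy_super_work(struct work_struct *work)
	{
		struct super_block *s = container_of(work, struct super_block,
						     destroy_work);

		/* The heavy synchronize_srcu() happens here, off the
		 * user-visible unmount path. */
		unregister_shrinker_delayed_finalize(&s->s_shrink);
		/* ... existing freeing of the superblock ... */
	}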
>
>  ---
>   include/linux/shrinker.h |  2 ++
>   mm/vmscan.c              | 38 ++++++++++++++++++++++++++++++--------
>   2 files changed, 32 insertions(+), 8 deletions(-)
>
>  diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
>  index 224293b2dd06..4ba2986716d3 100644
>  --- a/include/linux/shrinker.h
>  +++ b/include/linux/shrinker.h
>  @@ -4,6 +4,7 @@
>
>   #include <linux/atomic.h>
>   #include <linux/types.h>
>  +#include <linux/rwsem.h>
>
>   /*
>    * This struct is used to pass information from page reclaim to the shrinkers.
>  @@ -83,6 +84,7 @@ struct shrinker {
>   #endif
>           /* objs pending delete, per node */
>           atomic_long_t *nr_deferred;
>  +        struct rw_semaphore rwsem;
>   };
>
>   #define DEFAULT_SEEKS 2 /* A good number if you don't know better. */
>
>  diff --git a/mm/vmscan.c b/mm/vmscan.c
>  index eeca83e28c9b..19fc129771ce 100644
>  --- a/mm/vmscan.c
>  +++ b/mm/vmscan.c
>  @@ -706,6 +706,7 @@ static int __prealloc_shrinker(struct shrinker *shrinker)
>           if (!shrinker->nr_deferred)
>                   return -ENOMEM;
>
>  +        init_rwsem(&shrinker->rwsem);
>           return 0;
>   }
>
>  @@ -757,7 +758,9 @@ void register_shrinker_prepared(struct shrinker *shrinker)
>   {
>           mutex_lock(&shrinker_mutex);
>           list_add_tail_rcu(&shrinker->list, &shrinker_list);
>  +        down_write(&shrinker->rwsem);
>           shrinker->flags |= SHRINKER_REGISTERED;
>  +        up_write(&shrinker->rwsem);
>           shrinker_debugfs_add(shrinker);
>           mutex_unlock(&shrinker_mutex);
>   }
>
>  @@ -802,7 +805,7 @@ EXPORT_SYMBOL(register_shrinker);
>   /*
>    * Remove one
>    */
>  -void unregister_shrinker(struct shrinker *shrinker)
>  +void unregister_shrinker_delayed_initiate(struct shrinker *shrinker)
>   {
>           struct dentry *debugfs_entry;
>           int debugfs_id;
>
>  @@ -812,20 +815,33 @@ void unregister_shrinker(struct shrinker *shrinker)
>
>           mutex_lock(&shrinker_mutex);
>           list_del_rcu(&shrinker->list);
>  +        down_write(&shrinker->rwsem);
>           shrinker->flags &= ~SHRINKER_REGISTERED;
>  +        up_write(&shrinker->rwsem);
>           if (shrinker->flags & SHRINKER_MEMCG_AWARE)
>                   unregister_memcg_shrinker(shrinker);
>           debugfs_entry = shrinker_debugfs_detach(shrinker, &debugfs_id);
>           mutex_unlock(&shrinker_mutex);
>
>  +        shrinker_debugfs_remove(debugfs_entry, debugfs_id); // This is moved in your patch
>  +}
>  +EXPORT_SYMBOL(unregister_shrinker_delayed_initiate);
>  +
>  +void unregister_shrinker_delayed_finalize(struct shrinker *shrinker)
>  +{
>           atomic_inc(&shrinker_srcu_generation);
>           synchronize_srcu(&shrinker_srcu);
>
>  -        shrinker_debugfs_remove(debugfs_entry, debugfs_id);
>  -
>           kfree(shrinker->nr_deferred);
>           shrinker->nr_deferred = NULL;
>   }
>  +EXPORT_SYMBOL(unregister_shrinker_delayed_finalize);
>  +
>  +void unregister_shrinker(struct shrinker *shrinker)
>  +{
>  +        unregister_shrinker_delayed_initiate(shrinker);
>  +        unregister_shrinker_delayed_finalize(shrinker);
>  +}
>   EXPORT_SYMBOL(unregister_shrinker);
>
>   /**
>  @@ -856,9 +872,14 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>                                             : SHRINK_BATCH;
>           long scanned = 0, next_deferred;
>
>  +        down_read(&shrinker->rwsem);
>  +        if (!(shrinker->flags & SHRINKER_REGISTERED))
>  +                goto unlock;
>           freeable = shrinker->count_objects(shrinker, shrinkctl);
>  -        if (freeable == 0 || freeable == SHRINK_EMPTY)
>  -                return freeable;
>  +        if (freeable == 0 || freeable == SHRINK_EMPTY) {
>  +                freed = freeable;
>  +                goto unlock;
>  +        }
>
>           /*
>            * copy the current shrinker scan count into a local variable
>
>  @@ -935,6 +956,8 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>            * manner that handles concurrent updates.
>            */
>           new_nr = add_nr_deferred(next_deferred, shrinker, shrinkctl);
>  +unlock:
>  +        up_read(&shrinker->rwsem);
>
>
> It should be moved after trace_mm_shrink_slab_end().
>
> Could you explain the reason? I don't see the variable it will protect.

We jump to the unlock label before trace_mm_shrink_slab_start(), so I
think we should not then call trace_mm_shrink_slab_end().
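
In other words, roughly like this (a sketch against the patch above), so
that the early exit skips both tracepoints:

	down_read(&shrinker->rwsem);
	if (!(shrinker->flags & SHRINKER_REGISTERED))
		goto unlock;
	/* ... trace_mm_shrink_slab_start(), counting and scanning ... */
	trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr,
				 new_nr, total_scan);
unlock:
	up_read(&shrinker->rwsem);
	return freed;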

>
>
>           trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr, total_scan);
>           return freed;
>  @@ -968,9 +991,8 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
>                   struct shrinker *shrinker;
>
>                   shrinker = idr_find(&shrinker_idr, i);
>  -                if (unlikely(!shrinker || !(shrinker->flags & SHRINKER_REGISTERED))) {
>  -                        if (!shrinker)
>  -                                clear_bit(i, info->map);
>  +                if (unlikely(!shrinker)) {
>  +                        clear_bit(i, info->map);
>                           continue;
>                   }
>
> Keep this as a fast path?
>
> Probably, yes. And also we need 1) down_trylock() instead of down_read(),
> and 2) rwsem_is_contended() in do_shrink_slab().

Agreed; although the writer-side critical section of shrinker->rwsem is
very short, this prevents unnecessary sleeps.
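
Something like this in do_shrink_slab(), then (a sketch only; the exact
back-off policy is up for discussion):

	if (!down_read_trylock(&shrinker->rwsem))
		return 0;	/* a writer (unregister) is in flight, skip */

	if (!(shrinker->flags & SHRINKER_REGISTERED)) {
		up_read(&shrinker->rwsem);
		return 0;
	}

	/* ... and inside the scan loop, bail out quickly when an
	 * unregistering writer is waiting on the rwsem: */
	if (rwsem_is_contended(&shrinker->rwsem))
		break;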

>
>
> After applying the above patch, the ops/s performance regression can be
> resolved. And it can be guaranteed that the shrinker is not running after
> unregister_shrinker_delayed_initiate(), so the previous semantics are
> not broken.
>
> Keeping the old semantics or not is quite subjective, I think. It's
> possible to make strong arguments for both cases: the first is faster,
> the second is easier for users to adopt. For me personally, the faster
> approach looks better.

Agreed. I also like a completely lock-less slab shrink.

> Since the lock granularity of down_read() has changed to per-shrinker
> granularity, it seems that the down_read() perf hotspot should not be
> very high. I'm not quite sure why.
>
> (The test script used is the script in
> https://lore.kernel.org/all/20230313112819.38938-4-zhengqi.arch@bytedance.com/)
>
> Hmm, possibly the original shrinker_rwsem had a lot of atomic contention
> between CPUs on down_read(), while with the small per-shrinker locks it
> does not. Are

I guess so.

> the CONFIG_ options for lock debugging the same in the original and in this case?

Basically yes, I will do some more testing.

Thanks,
Qi

>
>
>    25.28% [kernel] [k] do_shrink_slab
>    21.91% [kernel] [k] pv_native_safe_halt
>    10.81% [kernel] [k] _find_next_bit
>    10.47% [kernel] [k] down_read
>     8.75% [kernel] [k] shrink_slab
>     4.03% [kernel] [k] up_read
>     3.29% [kernel] [k] shrink_lruvec
>     2.75% [kernel] [k] xa_load
>     2.73% [kernel] [k] mem_cgroup_iter
>     2.67% [kernel] [k] shrink_node
>     1.30% [kernel] [k] list_lru_count_one
>
> Thanks,
> Qi
>
>

--
Thanks,
Qi
