Subject: Re: [PATCH v4 7/8] mm: vmscan: remove shrinker_rwsem from synchronize_shrinkers()
From: Christian König
On 09.03.23 at 08:06, Qi Zheng wrote:
> Hi Kirill,
>
> On 2023/3/9 06:39, Kirill Tkhai wrote:
>> On 07.03.2023 09:56, Qi Zheng wrote:
>>> Now that there are no readers of shrinker_rwsem,
>>> synchronize_shrinkers() no longer needs to hold the writer lock of
>>> shrinker_rwsem to wait for all running shrinkers to complete;
>>> synchronize_srcu() is enough.
>>>
>>> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
>>> ---
>>>   mm/vmscan.c | 8 ++------
>>>   1 file changed, 2 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>> index 7aaf6f94ac1b..ac7ab4aa344f 100644
>>> --- a/mm/vmscan.c
>>> +++ b/mm/vmscan.c
>>> @@ -796,15 +796,11 @@ EXPORT_SYMBOL(unregister_shrinker);
>>>   /**
>>>    * synchronize_shrinkers - Wait for all running shrinkers to complete.
>>>    *
>>> - * This is equivalent to calling unregister_shrink() and register_shrinker(),
>>> - * but atomically and with less overhead. This is useful to guarantee that all
>>> - * shrinker invocations have seen an update, before freeing memory, similar to
>>> - * rcu.
>>> + * This is useful to guarantee that all shrinker invocations have seen an
>>> + * update, before freeing memory.
>>>    */
>>>   void synchronize_shrinkers(void)
>>>   {
>>> -    down_write(&shrinker_rwsem);
>>> -    up_write(&shrinker_rwsem);
>>>       atomic_inc(&shrinker_srcu_generation);
>>>       synchronize_srcu(&shrinker_srcu);
>>>   }
>>
>> Just curious, callers of synchronize_shrinkers() don't need to wait for
>> parallel register_shrinker() and unregister_shrinker() to complete?
>> Here we only need to wait for parallel shrink_slab(), correct?
>
> I think yes.
>
> synchronize_shrinkers() is currently only used by the TTM pool.
>
> In the TTM pool, a shrinker named "drm-ttm_pool" is registered, and its
> scan_objects callback picks an entry from the pool's own shrinker_list:
>
> ttm_pool_shrink
> --> spin_lock(&shrinker_lock);
>     pt = list_first_entry(&shrinker_list, typeof(*pt), shrinker_list);
>     list_move_tail(&pt->shrinker_list, &shrinker_list);
>     spin_unlock(&shrinker_lock);
>
> These entries have been removed from shrinker_list before calling
> synchronize_shrinkers():
>
> ttm_pool_fini
> --> ttm_pool_type_fini
>     --> spin_lock(&shrinker_lock);
>     list_del(&pt->shrinker_list);
>     spin_unlock(&shrinker_lock);
>     synchronize_shrinkers
>
> So IIUC, we only need to wait for parallel shrink_slab() here, as its
> comment says:
>
> /* We removed the pool types from the LRU, but we need to also make sure
>  * that no shrinker is concurrently freeing pages from the pool.
>  */

Yes, your analysis is completely correct.

I just didn't want to add another SRCU to the critical code paths of the
TTM pool just for device hot plug when I already have that functionality.

We just make sure that no shrinker is running in parallel with the
destruction of the pool. Registering and unregistering are harmless.
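
For illustration, the ordering boils down to something like the sketch
below (not the actual TTM code; the my_* names are made-up stand-ins for
the real pool structures and teardown function):

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/shrinker.h>

/*
 * Hypothetical pool type; only the list linkage matters for the
 * ordering argument.
 */
struct my_pool_type {
        struct list_head shrinker_list;
        /* ... pages owned by this pool type ... */
};

/* Lock and list the shrinker's scan callback picks entries from. */
static DEFINE_SPINLOCK(my_pool_lock);
static LIST_HEAD(my_pool_list);

static void my_pool_type_fini(struct my_pool_type *pt)
{
        /* 1) Hide the entry from any future shrinker invocation. */
        spin_lock(&my_pool_lock);
        list_del(&pt->shrinker_list);
        spin_unlock(&my_pool_lock);

        /*
         * 2) Wait for shrinker invocations that may already have picked
         *    this entry off the list. synchronize_shrinkers() (now backed
         *    by synchronize_srcu()) only has to wait for running
         *    shrinkers, not for concurrent register_shrinker() or
         *    unregister_shrinker() calls.
         */
        synchronize_shrinkers();

        /* 3) Only now is it safe to free the pages of this pool type. */
}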

Regards,
Christian.

>
> + CC: Christian König :)
>
> Thanks,
> Qi
