Subject: Re: [PATCH v5 5/7] mm: rework non-root kmem_cache lifecycle management
On Tue, May 28, 2019 at 01:37:50PM -0400, Waiman Long wrote:
> On 5/28/19 1:08 PM, Vladimir Davydov wrote:
> >> static void flush_memcg_workqueue(struct kmem_cache *s)
> >> {
> >> + /*
> >> + * memcg_params.dying is synchronized using slab_mutex AND
> >> + * memcg_kmem_wq_lock spinlock, because it's not always
> >> + * possible to grab slab_mutex.
> >> + */
> >> mutex_lock(&slab_mutex);
> >> + spin_lock(&memcg_kmem_wq_lock);
> >> s->memcg_params.dying = true;
> >> + spin_unlock(&memcg_kmem_wq_lock);
> > I would completely switch from the mutex to the new spin lock -
> > acquiring them both looks weird.
> >
> >> mutex_unlock(&slab_mutex);
> >>
> >> /*
>
> There are places where slab_mutex is held while sleeping functions
> like kvzalloc() are called. I understand that taking both the mutex
> and the spinlock looks ugly, but converting all the slab_mutex
> critical sections to spinlock critical sections would be a major
> undertaking by itself. So I would suggest leaving that for now.

I didn't mean that. I meant taking the spin lock wherever we need to
access the 'dying' flag, even if slab_mutex is held, so that we don't
need to take mutex_lock in flush_memcg_workqueue, where it's used
solely for 'dying' synchronization.
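
A minimal sketch of what I mean, reusing the names from the hunk above
(untested; the rest of the function stays as it is in your patch):

static void flush_memcg_workqueue(struct kmem_cache *s)
{
	/*
	 * With 'dying' protected by memcg_kmem_wq_lock alone, there is
	 * no need to take slab_mutex here at all.
	 */
	spin_lock(&memcg_kmem_wq_lock);
	s->memcg_params.dying = true;
	spin_unlock(&memcg_kmem_wq_lock);

	/* ... flush the pending work as in the current patch ... */
}

Correspondingly, every other path that reads or writes 'dying' would
take memcg_kmem_wq_lock around the access, even when slab_mutex is
already held there. slab_mutex keeps protecting whatever it protects
today; the spin lock just becomes the only lock that matters for
'dying'.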
