Subject: Re: [PATCH 3/3] zsmalloc: do not take class lock in zs_pages_to_compact()
Hi,

On (07/16/15 08:38), Minchan Kim wrote:
> > > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > > index b10a228..824c182 100644
> > > --- a/mm/zsmalloc.c
> > > +++ b/mm/zsmalloc.c
> > > @@ -1811,9 +1811,7 @@ unsigned long zs_pages_to_compact(struct zs_pool *pool)
> > >  		if (class->index != i)
> > >  			continue;
> > >
> > > -		spin_lock(&class->lock);
> > >  		pages_to_free += zs_can_compact(class);
> > > -		spin_unlock(&class->lock);
> > >  	}
> > >
> > >  	return pages_to_free;
> >
> > This patch still makes sense. Agree?
>
> There is already a race window between shrink_count and shrink_slab,
> so it would be okay to return a stale stat after removing the lock,
> as long as the difference is not huge.
>
> Besides, we don't currently obey the shrinker's nr_to_scan in
> zs_shrinker_scan, so such accuracy would be pointless anyway.

Yeah, the automatic shrinker may run concurrently with a user-triggered
one, so it may be hard (time consuming) to release the exact number of
pages that we returned from _count(). We could look at `sc->nr_to_scan'
to avoid releasing more pages than the shrinker asked us to release, but
I'd probably prefer to keep the existing behaviour when we are called
from the shrinker.
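
For reference, the two shrinker callbacks in this series look roughly
like the sketch below (reconstructed from memory of the patches under
review, so helper and field names are approximate): the _count() side
just sums per-class estimates via zs_pages_to_compact(), while the
_scan() side calls zs_compact() and effectively ignores sc->nr_to_scan.

/* Sketch only: approximate shape of the zsmalloc shrinker callbacks. */
static unsigned long zs_shrinker_count(struct shrinker *shrinker,
		struct shrink_control *sc)
{
	struct zs_pool *pool = container_of(shrinker, struct zs_pool,
			shrinker);

	/* Sums zs_can_compact() over all classes; an estimate only. */
	return zs_pages_to_compact(pool);
}

static unsigned long zs_shrinker_scan(struct shrinker *shrinker,
		struct shrink_control *sc)
{
	unsigned long pages_freed;
	struct zs_pool *pool = container_of(shrinker, struct zs_pool,
			shrinker);

	pages_freed = pool->stats.pages_compacted;
	/*
	 * zs_compact() walks and compacts every class, so sc->nr_to_scan
	 * is not honoured, and it can run concurrently with a
	 * user-triggered compaction.
	 */
	pages_freed = zs_compact(pool) - pages_freed;

	return pages_freed ? pages_freed : SHRINK_STOP;
}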

OK, will resend later today.

-ss

