Subject: Re: [PATCH 2/2 v2] mm/zsmalloc.c: Fix race condition in zs_destroy_pool

On Tue, 20 Aug 2019 11:59:39 +0900 Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com> wrote:

> On (08/09/19 11:17), Henry Burns wrote:
> > In zs_destroy_pool() we call flush_work(&pool->free_work). However, we
> > have no guarantee that migration isn't happening in the background
> > at that time.
> >
> > Since migration can't directly free pages, it relies on free_work
> > being scheduled to free the pages. But there's nothing preventing an
> > in-progress migration from queuing the work *after*
> > zs_unregister_migration() has called flush_work(). That would leave
> > pages still pointing at the inode when we free it.
> >
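(To make the interleaving concrete, the race being described looks
roughly like this; call sites paraphrased from mm/zsmalloc.c:)

	CPU0: zs_destroy_pool()			CPU1: migration
	-----------------------		---------------
	zs_unregister_migration()
	  flush_work(&pool->free_work)
						zs_page_migrate()
						  /* can't free directly */
						  schedule_work(&pool->free_work)
	  iput(pool->inode)
						free_work then runs against
						pages whose mapping still
						points at the freed inode
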
> > Since we know at destroy time all objects should be free, no new
> > migrations can come in (zs_page_isolate() fails for fully-free
> > zspages). This means it is sufficient to track a "# isolated zspages"
> > count by class, and have the destroy logic ensure all such pages have
> > drained before proceeding. Keeping that state under the class
> > spinlock keeps the logic straightforward.
> >
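FWIW, a minimal sketch of the drain being described might look like the
below. The field and helper names are illustrative, and a single
pool-wide atomic stands in for the per-class count mentioned above:

	/*
	 * Illustrative sketch only: assumes zs_pool grows a
	 * migration_wait waitqueue, an isolated_pages counter and a
	 * destroying flag.
	 */
	static void wait_for_isolated_drain(struct zs_pool *pool)
	{
		/*
		 * At destroy time all objects are free and
		 * zs_page_isolate() fails for fully-free zspages, so no
		 * new isolations can start; only wait for in-flight
		 * ones to drain.
		 */
		wait_event(pool->migration_wait,
			   atomic_long_read(&pool->isolated_pages) == 0);
	}

	static void zs_unregister_migration(struct zs_pool *pool)
	{
		pool->destroying = true;
		smp_mb();	/* make ->destroying visible to migration */
		wait_for_isolated_drain(pool);	/* may block */
		flush_work(&pool->free_work);
		iput(pool->inode);
	}

	/* migration side: pair each isolate with a putback/free */
	static void dec_isolated(struct zs_pool *pool)
	{
		if (atomic_long_dec_and_test(&pool->isolated_pages) &&
		    pool->destroying)
			wake_up_all(&pool->migration_wait);
	}
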
> > Fixes: 48b4800a1c6a ("zsmalloc: page migration support")
> > Signed-off-by: Henry Burns <henryburns@google.com>
>
> Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
>

Thanks. So we have a couple of races which result in memory leaks? Do
we feel this is serious enough to justify a -stable backport of the
fixes?
