From: Minchan Kim <minchan@kernel.org>
Subject: Re: [PATCH] zram: bug fix: delay lock holding in zram_slot_free_notify
Hello Greg,

On Fri, Aug 09, 2013 at 04:39:08PM -0700, Greg Kroah-Hartman wrote:
> On Tue, Aug 06, 2013 at 01:26:34AM +0900, Minchan Kim wrote:
> > On Mon, Aug 05, 2013 at 04:18:34PM +0900, Minchan Kim wrote:
> > > I was preparing to promote zram out of staging and it was almost done.
> > > Before sending the patch, I ran a final test and my eyebrows went up.
> > >
> > > [1] introduced down_write in zram_slot_free_notify to prevent a race
> > > between zram_slot_free_notify and zram_bvec_[read|write]. The race
> > > can happen if somebody with permission to open the swap device
> > > reads from it directly while it is being used for swap in parallel.
> > >
> > > However, zram_slot_free_notify is called with the swap layer's
> > > spinlock held, so we must avoid holding the mutex there. Otherwise,
> > > lockdep warns about it.
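
(As a schematic aside for context -- illustrative names only, not code
from any actual kernel tree: the swap layer invokes the notifier with a
spinlock held, i.e. in atomic context, while down_write() on
zram->lock, a rw_semaphore, may sleep; with lockdep/DEBUG_ATOMIC_SLEEP
this is reported as sleeping in invalid context.)

        /* caller side, in the swap layer (schematic): */
        spin_lock(&swap_lock);                    /* atomic context begins */
        ops->swap_slot_free_notify(bdev, offset); /* -> zram_slot_free_notify */
        spin_unlock(&swap_lock);

        /* callee side, before this patch: */
        down_write(&zram->lock);                  /* may sleep: invalid here */
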
> > >
> > > I guess the best solution is to redesign the zram locking scheme
> > > entirely, but we are on the verge of promotion, so it's not desirable
> > > to change a lot of critical code now. Such a big change would also not
> > > be in good shape for backporting to stable trees, so I think this
> > > simple patch is the best option at the moment.
> > >
> > > [1] [57ab0485, zram: use zram->lock to protect zram_free_page()
> > > in swap free notify path]
> > >
> > > Cc: Jiang Liu <jiang.liu@huawei.com>
> > > Cc: Nitin Gupta <ngupta@vflare.org>
> > > Cc: stable@vger.kernel.org
> > > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > > ---
> > > drivers/staging/zram/zram_drv.c | 15 ++++++++++++++-
> > > 1 file changed, 14 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
> > > index 7ebf91d..7b574c4 100644
> > > --- a/drivers/staging/zram/zram_drv.c
> > > +++ b/drivers/staging/zram/zram_drv.c
> > > @@ -440,6 +440,13 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
> > > goto out;
> > > }
> > >
> > > + /*
> > > + * zram_slot_free_notify could have missed the free, so let's
> > > + * double-check here.
> > > + */
> > > + if (unlikely(meta->table[index].handle))
> > > + zram_free_page(zram, index);
> > > +
> > > ret = lzo1x_1_compress(uncmem, PAGE_SIZE, src, &clen,
> > > meta->compress_workmem);
> > >
> > > @@ -727,7 +734,13 @@ static void zram_slot_free_notify(struct block_device *bdev,
> > > struct zram *zram;
> > >
> > > zram = bdev->bd_disk->private_data;
> > > - down_write(&zram->lock);
> > > + /*
> > > + * This function is called in atomic context, so down_write must
> > > + * not be used. If we cannot take the mutex, the free will be
> > > + * handled by zram_bvec_write later when the same index is overwritten.
> > > + */
> > > + if (!down_write_trylock(&zram->lock))
> > > + return;
> > > zram_free_page(zram, index);
> > > up_write(&zram->lock);
> > > atomic64_inc(&zram->stats.notify_free);
> > > --
> > > 1.7.9.5
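
(To experiment with this locking pattern outside the kernel, here is a
minimal userspace analogue in C with pthreads. All names below --
slot_write(), slot_free_notify(), slot_handle -- are invented for
illustration and do not exist in zram; the sketch only mirrors the two
hunks above: the non-blocking path tries the lock and gives up on
contention, and the blocking write path double-checks for the free
that was skipped. Builds with "cc -pthread".)

        #include <pthread.h>
        #include <stdlib.h>

        static pthread_rwlock_t slot_lock = PTHREAD_RWLOCK_INITIALIZER;
        static void *slot_handle;       /* models meta->table[index].handle */

        /* Models zram_free_page(). */
        static void slot_free(void)
        {
                free(slot_handle);
                slot_handle = NULL;
        }

        /* Models zram_slot_free_notify(): runs in a context that must
         * not block, so it only ever tries the lock and defers on failure. */
        static void slot_free_notify(void)
        {
                if (pthread_rwlock_trywrlock(&slot_lock) != 0)
                        return;         /* contended: leave the free for later */
                slot_free();
                pthread_rwlock_unlock(&slot_lock);
        }

        /* Models zram_bvec_write(): may block, and double-checks for a
         * free that slot_free_notify() had to skip before reusing the slot. */
        static void slot_write(void *new_handle)
        {
                pthread_rwlock_wrlock(&slot_lock);
                if (slot_handle)        /* missed free? reclaim it first */
                        slot_free();
                slot_handle = new_handle;
                pthread_rwlock_unlock(&slot_lock);
        }

        int main(void)
        {
                slot_write(malloc(16));
                slot_free_notify();     /* lock free: handle released now */
                slot_write(malloc(16)); /* either way, no handle is leaked */
                slot_free();            /* final cleanup */
                return 0;
        }
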
> > >
> >
> > How about this version?
>
> I'm guessing you tested it out? If so, please resend in a format that I
> can apply it in.

Sure, I will post soon.
Thanks!

--
Kind regards,
Minchan Kim

