Subject: Re: [PATCH 1/2] staging: zram: minimize `slot_free_lock' usage (v2)
> > Calling handle_pending_slot_free() for every RW operation may
> > cause unnecessary slot_free_lock locking, because most likely the
> > process will see a NULL slot_free_rq. Perform handle_pending_slot_free()
> > only when the current process detects that slot_free_rq is not NULL.
> >
> > v2: protect handle_pending_slot_free() with zram rw_lock.
> >
>
> zram->slot_free_lock protects zram->slot_free_rq, but shouldn't the zram
> rw_lock be wrapped around the whole operation like the original code
> does? I don't know the zram code, but the original looks like it makes
> sense; in this one the locks look duplicative.
>
> Should the down_read() in the original code be changed to down_write()?
>

I'm not touching the locking around the existing READ/WRITE commands.

the original code:

static void handle_pending_slot_free(struct zram *zram)
{
        struct zram_slot_free *free_rq;

        spin_lock(&zram->slot_free_lock);
        while (zram->slot_free_rq) {
                free_rq = zram->slot_free_rq;
                zram->slot_free_rq = free_rq->next;
                zram_free_page(zram, free_rq->index);
                kfree(free_rq);
        }
        spin_unlock(&zram->slot_free_lock);
}

static int zram_bvec_rw(struct zram *zram, struct bio_vec *bvec, u32 index,
                        int offset, struct bio *bio, int rw)
{
        int ret;

        if (rw == READ) {
                down_read(&zram->lock);
                handle_pending_slot_free(zram);
                ret = zram_bvec_read(zram, bvec, index, offset, bio);
                up_read(&zram->lock);
        } else {
                down_write(&zram->lock);
                handle_pending_slot_free(zram);
                ret = zram_bvec_write(zram, bvec, index, offset);
                up_write(&zram->lock);
        }

        return ret;
}



the new one:

static void handle_pending_slot_free(struct zram *zram)
{
        struct zram_slot_free *free_rq;

        down_write(&zram->lock);
        spin_lock(&zram->slot_free_lock);
        while (zram->slot_free_rq) {
                free_rq = zram->slot_free_rq;
                zram->slot_free_rq = free_rq->next;
                zram_free_page(zram, free_rq->index);
                kfree(free_rq);
        }
        spin_unlock(&zram->slot_free_lock);
        up_write(&zram->lock);
}

static int zram_bvec_rw(struct zram *zram, struct bio_vec *bvec, u32 index,
                        int offset, struct bio *bio, int rw)
{
        int ret;

        if (zram->slot_free_rq)
                handle_pending_slot_free(zram);

        if (rw == READ) {
                down_read(&zram->lock);
                ret = zram_bvec_read(zram, bvec, index, offset, bio);
                up_read(&zram->lock);
        } else {
                down_write(&zram->lock);
                ret = zram_bvec_write(zram, bvec, index, offset);
                up_write(&zram->lock);
        }

        return ret;
}


Both READ and WRITE operations are still protected by down_read() on the READ
path and down_write() on the WRITE path. However, handle_pending_slot_free()
and the zram->slot_free_lock locking are no longer performed on every
READ/WRITE; handle_pending_slot_free() is called only when zram->slot_free_rq
is not NULL. handle_pending_slot_free() in turn protects the zram_free_page()
call with down_write(), so concurrent READ/WRITE operations are not affected.
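
For completeness, the producer side is not quoted above. Here is a minimal
sketch of how entries could end up on zram->slot_free_rq, inferred from the
consumer loop in handle_pending_slot_free(); the struct layout and the
add_slot_free() helper are assumptions for illustration, not verbatim driver
source:

struct zram_slot_free {
        unsigned long index;            /* page index queued for freeing */
        struct zram_slot_free *next;    /* singly linked pending list */
};

/* hypothetical helper: publish a pending free under slot_free_lock */
static void add_slot_free(struct zram *zram, struct zram_slot_free *free_rq)
{
        spin_lock(&zram->slot_free_lock);
        free_rq->next = zram->slot_free_rq;
        zram->slot_free_rq = free_rq;
        spin_unlock(&zram->slot_free_lock);
}

Since the list head is only published under slot_free_lock, the unlocked NULL
check in zram_bvec_rw() can at worst miss an entry queued concurrently; that
entry is simply drained on a later request, so skipping the lock in the common
(empty) case should not lose any pending frees.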

-ss

