Subject: Re: [PATCH v2] lightnvm: fix rrpc_lun_gc
On 12/31/2015 05:40 AM, Wenwei Tao wrote:
> This patch fixes two issues in rrpc_lun_gc:
>
> 1. prio_list is protected by rrpc_lun's lock, not nvm_lun's, so
> acquire rlun's lock instead of lun's before operating on the list.
>
> 2. We delete the block from prio_list before allocating the gcb, but
> the gcb allocation may fail, leaving the block off the list, so it
> will never be reclaimed. To solve this issue, delete the block only
> after the gcb allocation succeeds.
>
> Signed-off-by: Wenwei Tao <ww.tao0320@gmail.com>
> ---
>
> Changes in v2:
> - Move the gcb allocation earlier so the debug log reports
> the correct message.
>
> drivers/lightnvm/rrpc.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
> index 67b14d4..40b0309 100644
> --- a/drivers/lightnvm/rrpc.c
> +++ b/drivers/lightnvm/rrpc.c
> @@ -443,7 +443,7 @@ static void rrpc_lun_gc(struct work_struct *work)
>  	if (nr_blocks_need < rrpc->nr_luns)
>  		nr_blocks_need = rrpc->nr_luns;
>
> -	spin_lock(&lun->lock);
> +	spin_lock(&rlun->lock);
>  	while (nr_blocks_need > lun->nr_free_blocks &&
>  				!list_empty(&rlun->prio_list)) {
>  		struct rrpc_block *rblock = block_prio_find_max(rlun);
> @@ -452,16 +452,16 @@ static void rrpc_lun_gc(struct work_struct *work)
>  		if (!rblock->nr_invalid_pages)
>  			break;
>
> +		gcb = mempool_alloc(rrpc->gcb_pool, GFP_ATOMIC);
> +		if (!gcb)
> +			break;
> +
>  		list_del_init(&rblock->prio);
>
>  		BUG_ON(!block_is_full(rrpc, rblock));
>
>  		pr_debug("rrpc: selected block '%lu' for GC\n", block->id);
>
> -		gcb = mempool_alloc(rrpc->gcb_pool, GFP_ATOMIC);
> -		if (!gcb)
> -			break;
> -
>  		gcb->rrpc = rrpc;
>  		gcb->rblk = rblock;
>  		INIT_WORK(&gcb->ws_gc, rrpc_block_gc);
> @@ -470,7 +470,7 @@ static void rrpc_lun_gc(struct work_struct *work)
>
>  		nr_blocks_need--;
>  	}
> -	spin_unlock(&lun->lock);
> +	spin_unlock(&rlun->lock);
>
>  	/* TODO: Hint that request queue can be started again */
>  }
>
Thanks, applied for 4.5. I've changed the title a bit.
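For illustration outside the kernel, below is a minimal, self-contained userspace sketch of the two points from the patch description above: the lock that is taken belongs to the structure that actually owns the priority list, and the work item is allocated before the block is unlinked, so a failed allocation leaves the block queued for a later GC pass. This is not the rrpc code; all identifiers (gc_lun, gc_block, gc_work, lun_gc) are invented for the example.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* A block queued for garbage collection on a per-LUN priority list. */
struct gc_block {
	int id;
	int nr_invalid_pages;
	struct gc_block *next;		/* singly linked prio list */
};

/* The structure that owns the priority list also owns its lock. */
struct gc_lun {
	pthread_mutex_t lock;		/* guards prio_list */
	struct gc_block *prio_list;
};

/* Work item describing one block to reclaim. */
struct gc_work {
	struct gc_lun *lun;
	struct gc_block *blk;
};

/* Unlink the head of the priority list; caller holds lun->lock. */
static void prio_del_head(struct gc_lun *lun)
{
	lun->prio_list = lun->prio_list->next;
}

static void lun_gc(struct gc_lun *lun, int nr_blocks_need)
{
	pthread_mutex_lock(&lun->lock);		/* the list's own lock */
	while (nr_blocks_need > 0 && lun->prio_list) {
		struct gc_block *blk = lun->prio_list;
		struct gc_work *work;

		if (!blk->nr_invalid_pages)
			break;

		/* Allocate first: on failure the block stays queued. */
		work = malloc(sizeof(*work));
		if (!work)
			break;

		prio_del_head(lun);		/* only now unlink it */

		printf("selected block %d for GC\n", blk->id);

		work->lun = lun;
		work->blk = blk;
		/* A real driver would hand the work item to a workqueue. */
		free(work);

		nr_blocks_need--;
	}
	pthread_mutex_unlock(&lun->lock);
}

int main(void)
{
	struct gc_block b2 = { .id = 2, .nr_invalid_pages = 4, .next = NULL };
	struct gc_block b1 = { .id = 1, .nr_invalid_pages = 7, .next = &b2 };
	struct gc_lun lun = { .prio_list = &b1 };

	pthread_mutex_init(&lun.lock, NULL);
	lun_gc(&lun, 2);
	pthread_mutex_destroy(&lun.lock);
	return 0;
}

Built with cc -pthread, this selects both blocks; if malloc ever failed, the loop would stop with the block still on prio_list, mirroring the reordering in the patch.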

