From: Ming Lei <tom.leiming@gmail.com>
Subject: [PATCH] lib/percpu_counter.c: disable local irq when updating percpu counter
Date: 2014-01-07
__percpu_counter_add() may be called from softirq/hardirq handlers
(for example, blk_mq_queue_exit() is typically called from
hardirq/softirq context), so we need to disable local irqs when
updating the percpu counter; otherwise counts may be lost.
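
To make the lost-update window concrete, here is an illustrative
interleaving under the old preempt_disable()-only protection (a
sketch of the race, not a captured trace):

	CPU0, process context                    CPU0, hardirq
	---------------------                    -------------
	count = __this_cpu_read(*fbc->counters) + amount;
	                                         __percpu_counter_add()
	                                           modifies *fbc->counters
	__this_cpu_write(*fbc->counters, count);
	/* the interrupt's update to *fbc->counters is overwritten */

preempt_disable() prevents migration to another CPU but does not mask
interrupts, so the read-modify-write above is not atomic with respect
to a hardirq on the same CPU.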

This patch fixes a problem where 'rmmod null_blk' may hang in
blk_cleanup_queue() because of miscounting of
request_queue->mq_usage_counter.
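
For reference, a simplified sketch of how the miscount turns into a
hang (paraphrased from the 3.13-era blk-mq drain logic; the literal
code differs):

	/*
	 * blk_cleanup_queue() waits for all users of the queue to go
	 * away before tearing it down (paraphrase, not literal code):
	 */
	while (percpu_counter_sum(&q->mq_usage_counter) != 0)
		msleep(10);
	/*
	 * blk_mq_queue_exit() decrements mq_usage_counter, possibly
	 * from irq context; if that update is lost to the race above,
	 * the sum never reaches zero and 'rmmod null_blk' hangs here.
	 */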

Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Shaohua Li <shli@fusionio.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Fan Du <fan.du@windriver.com>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
lib/percpu_counter.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index 7473ee3..2b87bc1 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -75,19 +75,19 @@ EXPORT_SYMBOL(percpu_counter_set);
void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
{
s64 count;
+ unsigned long flags;

- preempt_disable();
+ raw_local_irq_save(flags);
count = __this_cpu_read(*fbc->counters) + amount;
if (count >= batch || count <= -batch) {
- unsigned long flags;
- raw_spin_lock_irqsave(&fbc->lock, flags);
+ raw_spin_lock(&fbc->lock);
fbc->count += count;
- raw_spin_unlock_irqrestore(&fbc->lock, flags);
+ raw_spin_unlock(&fbc->lock);
__this_cpu_write(*fbc->counters, 0);
} else {
__this_cpu_write(*fbc->counters, count);
}
- preempt_enable();
+ raw_local_irq_restore(flags);
}
EXPORT_SYMBOL(__percpu_counter_add);
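
For completeness, a minimal hypothetical caller showing why the
irq-safe variant matters; the names nr_events/example_irq are
illustrative only and not part of this patch (note the 3.13-era
percpu_counter_init() signature, before the GFP argument was added):

	#include <linux/module.h>
	#include <linux/init.h>
	#include <linux/interrupt.h>
	#include <linux/percpu_counter.h>

	static struct percpu_counter nr_events;

	static int __init example_init(void)
	{
		/* 3.13-era API: percpu_counter_init(fbc, initial amount) */
		return percpu_counter_init(&nr_events, 0);
	}

	static irqreturn_t example_irq(int irq, void *dev_id)
	{
		/*
		 * Ends up in __percpu_counter_add() from hardirq
		 * context; safe only if the update path disables
		 * local irqs, as this patch does.
		 */
		percpu_counter_inc(&nr_events);
		return IRQ_HANDLED;
	}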

--
1.7.9.5

