    Subject: [patch 102/108] lib/genalloc.c: remove unmatched write_lock() in gen_pool_destroy
    2.6.30-stable review patch.  If anyone has any objections, please let us know.


    From: Zygo Blaxell <>

    commit 8e8a2dea0ca91fe2cb7de7ea212124cfe8c82c35 upstream.

    There is a call to write_lock() in gen_pool_destroy which is not balanced
    by any corresponding write_unlock(). This causes problems with preemption
    because the preemption-disable counter is incremented in the write_lock()
    call, but never decremented by any call to write_unlock().  This bug is
    rarely triggered in practice because gen_pool_destroy() has only a couple
    of in-tree callers, and one of them is in non-x86 arch-specific code.
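
    For illustration, a minimal sketch of the unbalanced pattern the patch
    removes (simplified, not the exact lib/genalloc.c source).  On a
    preemptible kernel, write_lock() bumps the preemption-disable counter;
    because nothing on this path ever calls write_unlock(), that counter
    never drops back after the function returns:

	/*
	 * Illustrative sketch only: field names are taken from the diff
	 * below, the function body is simplified.
	 */
	void gen_pool_destroy(struct gen_pool *pool)
	{
		struct list_head *_chunk, *_next_chunk;
		struct gen_pool_chunk *chunk;

		write_lock(&pool->lock);	/* preemption-disable counter++ */
		list_for_each_safe(_chunk, _next_chunk, &pool->chunks) {
			chunk = list_entry(_chunk, struct gen_pool_chunk, next_chunk);
			list_del(&chunk->next_chunk);
			kfree(chunk);
		}
		kfree(pool);
		/* no write_unlock(&pool->lock): the counter is never decremented */
	}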

    Signed-off-by: Zygo Blaxell <>
    Cc: Jiri Kosina <>
    Cc: Steve Wise <>
    Signed-off-by: Andrew Morton <>
    Signed-off-by: Linus Torvalds <>
    Signed-off-by: Greg Kroah-Hartman <>

    lib/genalloc.c | 1 -
    1 file changed, 1 deletion(-)

    --- a/lib/genalloc.c
    +++ b/lib/genalloc.c
    @@ -85,7 +85,6 @@ void gen_pool_destroy(struct gen_pool *p
     	int order = pool->min_alloc_order;
     	int bit, end_bit;
     
    -	write_lock(&pool->lock);
     	list_for_each_safe(_chunk, _next_chunk, &pool->chunks) {
     		chunk = list_entry(_chunk, struct gen_pool_chunk, next_chunk);
     
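
    For comparison only (not part of this patch): if serialization were
    actually needed on this path, the balanced form would pair the calls,
    for example:

	write_lock(&pool->lock);
	/* ... walk and free the chunks ... */
	write_unlock(&pool->lock);	/* preemption-disable counter restored */

    The fix removes the write_lock() rather than adding an unlock,
    presumably because a pool that is being destroyed should no longer
    have concurrent users to serialize against.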