    Subject: Re: list corruption in the last few days. (block ? crypto ?)
    Hi Linus,

    On Sun, Aug 7, 2011 at 11:58 AM, Pekka Enberg <> wrote:
    >> Christoph, I've been reading the code and spotted two potential issues in
    >> __slab_free(). The first one seems like an off-by-one where our comparison
    >> in deactivate_slab() doesn't match __slab_free().
    >> The other one is a remove_full() call in __slab_free() that can get called
    >> even if cache debugging is not enabled.
    >> Hmm?

    On Sun, 7 Aug 2011, Linus Torvalds wrote:
    > I'd like to do -rc1 today, regardless of whether this fixes things or
    > not (-rc1 is already a few days delayed).
    > The patch seems to be a good fix, and a likely candidate for the
    > corruption. Commit log and sign-off? I assume you've given it some
    > testing, even if you couldn't reproduce the original issue?

    No, I haven't tested the patch myself, but here's one in proper format in
    case someone wants to test it.


    From 85380c605764927576d6ef54e4e8a3354df05d47 Mon Sep 17 00:00:00 2001
    From: Pekka Enberg <>
    Date: Mon, 8 Aug 2011 07:56:49 +0300
    Subject: [PATCH] slub: Fix partial and full list handling in __slab_free

    Dave Jones and Xiaotian Feng reported SLUB list corruption.

    While I haven't been able to reproduce the issue, I spotted two problems in
    __slab_free() during code review:

    - The ->nr_partial check in __slab_free() has an off-by-one bug
    when compared to the similar check in deactivate_slab() (see the
    sketch after this list)

    - remove_full() is called even if cache debugging has not been enabled
    (see the sketch after the patch)
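
    To make the off-by-one concrete, here is a minimal userspace sketch.
    The struct fields and function names below are made-up stand-ins; only
    the two comparisons are taken from the kernel code:

    #include <stdio.h>

    /* Made-up stand-ins for the relevant kmem_cache fields. */
    struct kmem_cache_node { unsigned long nr_partial; };
    struct kmem_cache { unsigned long min_partial; };

    /* __slab_free() before the patch: discard the now-empty slab only
     * if the node holds MORE than min_partial partial slabs. */
    static int slab_free_discards(struct kmem_cache_node *n, struct kmem_cache *s)
    {
    	return n->nr_partial > s->min_partial;
    }

    /* deactivate_slab() (and __slab_free() after the patch): discard
     * once the node holds AT LEAST min_partial partial slabs. */
    static int deactivate_discards(struct kmem_cache_node *n, struct kmem_cache *s)
    {
    	return n->nr_partial >= s->min_partial;
    }

    int main(void)
    {
    	struct kmem_cache s = { .min_partial = 5 };
    	struct kmem_cache_node n = { .nr_partial = 5 };	/* the boundary */

    	printf("__slab_free (old): discard=%d\n", slab_free_discards(&n, &s));
    	printf("deactivate_slab:   discard=%d\n", deactivate_discards(&n, &s));
    	/* Prints 0 vs 1: exactly at nr_partial == min_partial the two
    	 * paths make opposite decisions about the same empty slab. */
    	return 0;
    }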

    Reported-by: Dave Jones <>
    Reported-by: Xiaotian Feng <>
    Signed-off-by: Pekka Enberg <>
    ---
     mm/slub.c |    5 +++--
     1 files changed, 3 insertions(+), 2 deletions(-)

    diff --git a/mm/slub.c b/mm/slub.c
    index eb5a8f9..cee8c20 100644
    --- a/mm/slub.c
    +++ b/mm/slub.c
    @@ -2368,7 +2368,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
     	if (was_frozen)
     		stat(s, FREE_FROZEN);
     	else {
    -		if (unlikely(!inuse && n->nr_partial > s->min_partial))
    +		if (unlikely(!inuse && n->nr_partial >= s->min_partial))
     			goto slab_empty;
     
     		/*
    @@ -2376,7 +2376,8 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
     		 * then add it.
     		 */
     		if (unlikely(!prior)) {
    -			remove_full(s, page);
    +			if (kmem_cache_debug(s))
    +				remove_full(s, page);
     			add_partial(n, page, 0);
     			stat(s, FREE_ADD_PARTIAL);
     		}