Subject: Re: 2.6.14-rc1-git-now still dying in mm/slab - this time line 1849
On Wed, 2005-09-21 at 06:33, Christoph Lameter wrote:

Hi Christoph,
I have some doubts over this...

> On Tue, 20 Sep 2005, Petr Vandrovec wrote:
>> slab belonging to node#1, while having acquired lock for cachep belonging
>> to node #0. Due to this check_spinlock_acquired_node(cachep, nodeid) fails
>> (check_spinlock_acquired_node(cachep, 0) would succeed).
> Hmmm. If a node runs out of memory then pages from another node may end up
> on the slab list of a node. But it seems that free_block cannot handle
> that properly.
> How are you producing the problem?
> Could you try the following patch:
> The numa slab allocator may allocate pages from foreign nodes onto the lists
> for a particular node if a node runs out of memory. Inspecting the slab->nodeid
> field will not reflect that the page is now in use for the slabs of another node.

IMO the slab->nodeid field just tells us which node's list3 this slab is
attached to, irrespective of the node the memory was actually allocated
from.
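To make that concrete, the slab descriptor in this tree looks roughly like
this (my paraphrase, not a verbatim copy of mm/slab.c):

struct slab {
	struct list_head	list;		/* links into l3->slabs_{full,partial,free} */
	unsigned long		colouroff;	/* colour offset for the first object */
	void			*s_mem;		/* address of the first object */
	unsigned int		inuse;		/* number of objects in use */
	kmem_bufctl_t		free;		/* index of the first free object */
	unsigned short		nodeid;		/* node whose list3 this slab is attached to */
};

So nodeid records where the slab is parked, not which node its pages
physically came from.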

> This patch fixes that issue by adding a node field to free_block so that the caller
> can indicate which node currently uses a slab.
But the nodeid is already accessible through the slab descriptor of the
object, and this nodeid is set in cache_grow.
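Roughly (again a paraphrase of the flow, not a verbatim quote):

/* cache_grow() stamps the descriptor with the node whose lists it is
 * growing, before attaching the slab to them: */
slabp->nodeid = nodeid;

/* and on the free side the node is recoverable from the object itself,
 * via its page, whatever node the pages actually live on: */
struct slab *slabp = GET_PAGE_SLAB(virt_to_page(objp));
struct kmem_list3 *l3 = cachep->nodelists[slabp->nodeid];

So free_block could look up the node from the descriptor instead of being
told by the caller.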

> Also removes the check for the current node from kmalloc_cache_node since the
> process may shift later to another node which may lead to an allocation on another
> node than intended.
Yes, that is possible, but wouldn't putting the check in
kmem_cache_alloc_node, after interrupts have been disabled, be better?
kmalloc_node/kmem_cache_alloc_node can be called at runtime as well, and
getting the object directly from the slab lists instead of the array
caches may slow things down. So I have tweaked the patch a little; see the
sketch of the fast path below.
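For context, the fast path at stake looks roughly like this (paraphrasing
the 2.6.13-era __cache_alloc, not verbatim):

/* With interrupts off: an array-cache hit is a couple of loads, while
 * __cache_alloc_node() always takes the per-node list_lock and walks
 * the partial/free slab lists. */
ac = ac_data(cachep);				/* this CPU's array cache */
if (likely(ac->avail))
	objp = ac_entry(ac)[--ac->avail];	/* lockless hit */
else
	objp = cache_alloc_refill(cachep, flags);	/* refill from the slab lists */

Keeping local-node allocations on this path is what the tweak below
preserves.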

Thanks & Regards,

Signed-off-by: Alok N Kataria <>

Index: linux-2.6.13/mm/slab.c
--- linux-2.6.13.orig/mm/slab.c	2005-09-24 00:08:00.221900000 +0530
+++ linux-2.6.13/mm/slab.c	2005-09-24 00:24:12.206645250 +0530
@@ -2507,16 +2507,12 @@
 #define cache_alloc_debugcheck_after(a,b,objp,d) (objp)
 #endif
 
-static inline void *__cache_alloc(kmem_cache_t *cachep, unsigned int __nocast flags)
+static inline void *____cache_alloc(kmem_cache_t *cachep, unsigned int __nocast flags)
 {
-	unsigned long save_flags;
 	void* objp;
 	struct array_cache *ac;
 
-	cache_alloc_debugcheck_before(cachep, flags);
-	local_irq_save(save_flags);
+	check_irq_off();
 	ac = ac_data(cachep);
 	if (likely(ac->avail)) {
@@ -2526,6 +2522,18 @@
 		objp = cache_alloc_refill(cachep, flags);
 	}
-	local_irq_restore(save_flags);
+	return objp;
+}
+
+static inline void *__cache_alloc(kmem_cache_t *cachep, unsigned int __nocast flags)
+{
+	unsigned long save_flags;
+	void* objp;
+
+	cache_alloc_debugcheck_before(cachep, flags);
+	local_irq_save(save_flags);
+	objp = ____cache_alloc(cachep, flags);
+	local_irq_restore(save_flags);
 	objp = cache_alloc_debugcheck_after(cachep, flags, objp, __builtin_return_address(0));
 	return objp;
 }
@@ -2841,7 +2849,7 @@
 	unsigned long save_flags;
 	void *ptr;
 
-	if (nodeid == numa_node_id() || nodeid == -1)
+	if (nodeid == -1)
 		return __cache_alloc(cachep, flags);
 
 	if (unlikely(!cachep->nodelists[nodeid])) {
@@ -2852,6 +2860,9 @@
 
 	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
-	ptr = __cache_alloc_node(cachep, flags, nodeid);
+	if (nodeid == numa_node_id())
+		ptr = ____cache_alloc(cachep, flags);
+	else
+		ptr = __cache_alloc_node(cachep, flags, nodeid);
 	local_irq_restore(save_flags);
 	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, __builtin_return_address(0));