 
Subject: Re: [PATCH -mm v2 1/3] slub: never fail to shrink cache
    On Wed, 28 Jan 2015 19:22:49 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:

    > SLUB's version of __kmem_cache_shrink() not only removes empty slabs,
    > but also tries to rearrange the partial lists, placing the most-filled
    > slabs at the head, to cope with fragmentation. To achieve that, it
    > allocates a temporary array of lists used to sort slabs by the number of
    > objects in use. If the allocation fails, the whole procedure is aborted.
    >
    > This is unacceptable for the kernel memory accounting extension of the
    > memory cgroup, where we want to be sure that kmem_cache_shrink() has
    > successfully discarded empty slabs. Although allocation failure is
    > utterly unlikely with the current page allocator implementation, which
    > retries GFP_KERNEL allocations of order <= 2 indefinitely, it is better
    > not to rely on that.
    >
    > This patch therefore makes __kmem_cache_shrink() allocate the array on
    > the stack instead of calling kmalloc, which may fail. The array size is
    > set to 32, because most SLUB caches store no more than 32 objects per
    > slab page. Slab pages with <= 32 free objects are sorted by the number
    > of objects in use, via the array, and promoted to the head of the
    > partial list, while slab pages with > 32 free objects are left at the
    > end of the list with no ordering imposed on them.
    >
    > ...
    >
    > @@ -3375,51 +3376,56 @@ int __kmem_cache_shrink(struct kmem_cache *s)
    >  	struct kmem_cache_node *n;
    >  	struct page *page;
    >  	struct page *t;
    > -	int objects = oo_objects(s->max);
    > -	struct list_head *slabs_by_inuse =
    > -		kmalloc(sizeof(struct list_head) * objects, GFP_KERNEL);
    > +	LIST_HEAD(discard);
    > +	struct list_head promote[SHRINK_PROMOTE_MAX];
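
    For concreteness, here is a rough user-space sketch of the bucketing
    scheme the changelog describes. The list_head and slab types below are
    simplified stand-ins for the kernel's, shrink_partial() is a made-up
    name, and only the promote/discard logic is meant to mirror the patch:

    /*
     * Illustration only: bucket partial slabs by free-object count into a
     * fixed-size on-stack array, then splice them back fullest-first.
     */
    #include <stdio.h>

    #define SHRINK_PROMOTE_MAX 32

    struct list_head { struct list_head *next, *prev; };

    static void list_init(struct list_head *h) { h->next = h->prev = h; }

    static void list_add(struct list_head *e, struct list_head *h)
    {
        e->next = h->next;
        e->prev = h;
        h->next->prev = e;
        h->next = e;
    }

    static void list_move(struct list_head *e, struct list_head *h)
    {
        e->prev->next = e->next;    /* unlink */
        e->next->prev = e->prev;
        list_add(e, h);             /* relink at head of h */
    }

    struct slab {
        struct list_head lru;       /* must stay first: we cast from it */
        int objects;                /* total objects in the slab page */
        int inuse;                  /* currently allocated objects */
    };

    /*
     * One pass over the partial list: empty slabs go to @discard, slabs
     * with 1..SHRINK_PROMOTE_MAX free objects are bucketed by free count
     * and spliced back so the fullest slabs end up at the head. Slabs
     * with more free objects are left where they are, unsorted.
     */
    static void shrink_partial(struct list_head *partial,
                               struct list_head *discard)
    {
        struct list_head promote[SHRINK_PROMOTE_MAX];  /* the on-stack array */
        struct list_head *pos, *next;
        int i;

        for (i = 0; i < SHRINK_PROMOTE_MAX; i++)
            list_init(&promote[i]);

        for (pos = partial->next; pos != partial; pos = next) {
            struct slab *s = (struct slab *)pos;
            int free = s->objects - s->inuse;

            next = pos->next;
            if (free == s->objects)
                list_move(pos, discard);            /* empty: release */
            else if (free <= SHRINK_PROMOTE_MAX)
                list_move(pos, &promote[free - 1]);
        }

        /* Fullest buckets are spliced last, so they land at the head. */
        for (i = SHRINK_PROMOTE_MAX - 1; i >= 0; i--)
            while (promote[i].next != &promote[i])
                list_move(promote[i].next, partial);
    }

    int main(void)
    {
        struct slab slabs[] = {
            { .objects = 16, .inuse = 0 },     /* empty -> discard */
            { .objects = 16, .inuse = 8 },     /* 8 free           */
            { .objects = 16, .inuse = 15 },    /* 1 free -> head   */
            { .objects = 16, .inuse = 12 },    /* 4 free           */
        };
        struct list_head partial, discard, *pos;
        unsigned long i;

        list_init(&partial);
        list_init(&discard);
        for (i = 0; i < sizeof(slabs) / sizeof(slabs[0]); i++)
            list_add(&slabs[i].lru, &partial);

        shrink_partial(&partial, &discard);

        /* Prints inuse=15, inuse=12, inuse=8: fullest first. */
        for (pos = partial.next; pos != &partial; pos = pos->next)
            printf("inuse=%d\n", ((struct slab *)pos)->inuse);
        return 0;
    }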

    512 bytes of stack. The call paths leading to __kmem_cache_shrink()
    are many and twisty. How do we know this isn't a problem?

    The logic behind choosing "32" sounds rather rubbery. What goes wrong
    if we use, say, "4"?
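
    For reference, the 512 figure is just 32 list heads of two pointers
    each on a 64-bit kernel: 32 * 16 = 512 bytes. A compile-time check of
    that arithmetic (assuming an LP64 target):

        _Static_assert(SHRINK_PROMOTE_MAX * sizeof(struct list_head) == 512,
                       "promote[] is 32 * 16 = 512 bytes on LP64");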


