Subject: Re: [PATCH -mm v2 1/3] slub: never fail to shrink cache
On Thu, Jan 29, 2015 at 10:22:16AM -0600, Christoph Lameter wrote:
> On Thu, 29 Jan 2015, Vladimir Davydov wrote:
>
> > Yeah, but the tool just writes 1 to /sys/kernel/slab/cache/shrink, i.e.
> > invokes shrink_store(), and I don't propose to remove slab placement
> > optimization from there. What I propose is to move slab placement
> > optimization from kmem_cache_shrink() to shrink_store(), because other
> > users of kmem_cache_shrink() don't seem to need it at all - they just
> > want to release empty slabs. Such a change wouldn't affect the behavior
> > of `slabinfo -s` at all.
>
> Well, we have to go through the chain of partial slabs anyway, so it's easy
> to do the optimization at that point.

That's true, but we can introduce a separate function that would both
release empty slabs and optimize slab placement, like the patch below
does. It would increase the code size a bit though, so I don't insist.

diff --git a/mm/slub.c b/mm/slub.c
index 1562955fe099..2cd401d82a41 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3359,7 +3359,7 @@ void kfree(const void *x)
 EXPORT_SYMBOL(kfree);
 
 /*
- * kmem_cache_shrink removes empty slabs from the partial lists and sorts
+ * shrink_slab_cache removes empty slabs from the partial lists and sorts
  * the remaining slabs by the number of items in use. The slabs with the
  * most items in use come first. New allocations will then fill those up
  * and thus they can be removed from the partial lists.
@@ -3368,7 +3368,7 @@ EXPORT_SYMBOL(kfree);
  * being allocated from last increasing the chance that the last objects
  * are freed in them.
  */
-int __kmem_cache_shrink(struct kmem_cache *s)
+static int shrink_slab_cache(struct kmem_cache *s)
 {
 	int node;
 	int i;
@@ -3423,6 +3423,32 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 	return 0;
 }
 
+int __kmem_cache_shrink(struct kmem_cache *s)
+{
+	int node;
+	struct kmem_cache_node *n;
+	struct page *page, *t;
+	LIST_HEAD(discard);
+	unsigned long flags;
+	int ret = 0;
+
+	flush_all(s);
+	for_each_kmem_cache_node(s, node, n) {
+		spin_lock_irqsave(&n->list_lock, flags);
+		list_for_each_entry_safe(page, t, &n->partial, lru)
+			if (!page->inuse)
+				list_move(&page->lru, &discard);
+		spin_unlock_irqrestore(&n->list_lock, flags);
+
+		list_for_each_entry_safe(page, t, &discard, lru)
+			discard_slab(s, page);
+
+		if (slabs_node(s, node))
+			ret = 1;
+	}
+	return ret;
+}
+
 static int slab_mem_going_offline_callback(void *arg)
 {
 	struct kmem_cache *s;
@@ -4683,7 +4709,7 @@ static ssize_t shrink_store(struct kmem_cache *s,
 				const char *buf, size_t length)
 {
 	if (buf[0] == '1') {
-		int rc = kmem_cache_shrink(s);
+		int rc = shrink_slab_cache(s);
 
 		if (rc)
 			return rc;
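
For context (and why the re-added __kmem_cache_shrink() has to stay non-static):
kmem_cache_shrink() in mm/slab_common.c is left untouched by the patch and keeps
resolving to __kmem_cache_shrink(), which would now only release empty slabs.
From memory it looks roughly like this:

	/* mm/slab_common.c (roughly, from memory) -- unchanged by the patch above */
	int kmem_cache_shrink(struct kmem_cache *cachep)
	{
		int ret;

		get_online_cpus();
		get_online_mems();
		ret = __kmem_cache_shrink(cachep);
		put_online_mems();
		put_online_cpus();
		return ret;
	}

So the in-kernel users that just want empty slabs back (the memory hotplug
callback visible in the hunk above, the memcg offline path) end up in
__kmem_cache_shrink() one way or another, while only the sysfs 'shrink'
attribute keeps running the partial-list sorting in shrink_slab_cache().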
