 
Subject: Re: [rfc/patch] mm/slub: restore/expand unfreeze_partials() local exclusion scope
From: Mike Galbraith <efault@gmx.de>
Date: Tue, 27 Jul 2021
On Mon, 2021-07-26 at 23:26 +0200, Vlastimil Babka wrote:
> On 7/26/21 7:00 PM, Mike Galbraith wrote:
> >
> > Why not do something like the below?...
>
> Yep, sounds like a good approach, thanks. Percpu partial is not *the*
> SLUB fast path, so it should be sufficient without the lockless cmpxchg
> tricks. Will incorporate in updated series.
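
For anyone skimming, here's the shape of that trade reduced to a toy
userspace sketch: the lockless detach retries a cmpxchg until it wins
the race, while the locked variant just reads and clears under
exclusion. Everything below is invented for illustration (toy_page,
detach_cmpxchg, detach_locked, a pthread mutex standing in for the
kernel's local_lock); it is not SLUB code.

/*
 * Toy sketch only: names are invented, and a pthread mutex stands in
 * for the kernel's local_lock.  Build with: gcc -std=c11 -pthread toy.c
 */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

struct toy_page { struct toy_page *next; };

static _Atomic(struct toy_page *) partial;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Old scheme: loop until the cmpxchg atomically swaps the list head
 * for NULL, i.e. until no other updater raced with us. */
static struct toy_page *detach_cmpxchg(void)
{
	struct toy_page *head;

	do {
		head = atomic_load(&partial);
	} while (head &&
		 !atomic_compare_exchange_weak(&partial, &head, NULL));
	return head;
}

/* New scheme: the lock excludes other writers, so a plain
 * read-then-clear suffices and no retry loop is needed. */
static struct toy_page *detach_locked(void)
{
	struct toy_page *head;

	pthread_mutex_lock(&lock);
	head = atomic_load(&partial);
	atomic_store(&partial, NULL);
	pthread_mutex_unlock(&lock);
	return head;
}

int main(void)
{
	struct toy_page a = { .next = NULL };

	atomic_store(&partial, &a);
	printf("cmpxchg detach: %p\n", (void *)detach_cmpxchg());
	atomic_store(&partial, &a);
	printf("locked detach:  %p\n", (void *)detach_locked());
	return 0;
}

Since the percpu partial list is off the hot path (as noted above),
trading the retry loop for a lock costs little, and it gives
PREEMPT_RT a real lock to provide the local exclusion $subject is
about.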

Great, my >= 5.13 trees will meanwhile wear it like so:

From: Vlastimil Babka <vbabka@suse.cz>
Date: Fri, 23 Jul 2021 23:17:18 +0200

mm, slub: Fix PREEMPT_RT plus SLUB_CPU_PARTIAL local exclusion

See https://lkml.org/lkml/2021/7/25/185

Mike: Remove ifdefs, make all configs take the straight-line path laid
out for RT by Vlastimil in his prospective (now confirmed) fix.

Signed-off-by: Mike Galbraith <efault@gmx.de>
---
mm/slub.c | 79 ++++++++++++++++++++++++++++++++------------------------------
1 file changed, 41 insertions(+), 38 deletions(-)

--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2437,13 +2437,12 @@ static void __unfreeze_partials(struct k
 static void unfreeze_partials(struct kmem_cache *s)
 {
 	struct page *partial_page;
+	unsigned long flags;
 
-	do {
-		partial_page = this_cpu_read(s->cpu_slab->partial);
-
-	} while (partial_page &&
-		 this_cpu_cmpxchg(s->cpu_slab->partial, partial_page, NULL)
-		 != partial_page);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
+	partial_page = this_cpu_read(s->cpu_slab->partial);
+	this_cpu_write(s->cpu_slab->partial, NULL);
+	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
 	if (partial_page)
 		__unfreeze_partials(s, partial_page);
@@ -2480,41 +2479,45 @@ static void put_cpu_partial(struct kmem_
 {
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 	struct page *oldpage;
-	int pages;
-	int pobjects;
-
-	slub_get_cpu_ptr(s->cpu_slab);
-	do {
-		pages = 0;
-		pobjects = 0;
-		oldpage = this_cpu_read(s->cpu_slab->partial);
-
-		if (oldpage) {
-			pobjects = oldpage->pobjects;
-			pages = oldpage->pages;
-			if (drain && pobjects > slub_cpu_partial(s)) {
-				/*
-				 * partial array is full. Move the existing
-				 * set to the per node partial list.
-				 */
-				unfreeze_partials(s);
-				oldpage = NULL;
-				pobjects = 0;
-				pages = 0;
-				stat(s, CPU_PARTIAL_DRAIN);
-			}
+	struct page *page_to_unfreeze = NULL;
+	unsigned long flags;
+	int pages = 0, pobjects = 0;
+
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
+
+	if ((oldpage = this_cpu_read(s->cpu_slab->partial))) {
+		pobjects = oldpage->pobjects;
+		pages = oldpage->pages;
+		if (drain && pobjects > slub_cpu_partial(s)) {
+			/*
+			 * partial array is full. Move the existing
+			 * set to the per node partial list.
+			 *
+			 * Postpone unfreezing until we drop the local
+			 * lock to avoid an RT unlock/relock requirement
+			 * due to MEMCG __slab_free() recursion.
+			 */
+			page_to_unfreeze = oldpage;
+
+			oldpage = NULL;
+			pobjects = 0;
+			pages = 0;
+			stat(s, CPU_PARTIAL_DRAIN);
 		}
+	}
+
+	pages++;
+	pobjects += page->objects - page->inuse;
+
+	page->pages = pages;
+	page->pobjects = pobjects;
+	page->next = oldpage;
 
-		pages++;
-		pobjects += page->objects - page->inuse;
+	this_cpu_write(s->cpu_slab->partial, page);
+	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
-		page->pages = pages;
-		page->pobjects = pobjects;
-		page->next = oldpage;
-
-	} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page)
-		 != oldpage);
-	slub_put_cpu_ptr(s->cpu_slab);
+	if (page_to_unfreeze)
+		__unfreeze_partials(s, page_to_unfreeze);
 #endif	/* CONFIG_SLUB_CPU_PARTIAL */
 }
 
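
The subtle bit is the ordering spelled out in the new comment: the
full percpu partial list is unlinked while the local lock is held,
but __unfreeze_partials() runs only after the lock is dropped, so a
MEMCG-induced trip back into __slab_free() doesn't force an
unlock/relock on RT. A toy sketch of that shape, with invented names
(toy_put_partial, toy_drain), a pthread mutex in place of local_lock,
and a printf in place of the real drain work:

/*
 * Toy sketch only: the point is the shape, not the code -- unlink
 * under the lock, drain after the lock is dropped.
 */
#include <pthread.h>
#include <stdio.h>

struct toy_page { struct toy_page *next; };

static struct toy_page *partial;
static int npartial;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for __unfreeze_partials(): in the real code this can end
 * up back in __slab_free() via memcg, so it must not be entered with
 * the local lock still held. */
static void toy_drain(struct toy_page *list)
{
	while (list) {
		printf("draining %p\n", (void *)list);
		list = list->next;
	}
}

static void toy_put_partial(struct toy_page *page)
{
	struct toy_page *to_drain = NULL;

	pthread_mutex_lock(&lock);
	if (npartial >= 2) {		/* "partial array is full" */
		to_drain = partial;	/* only unlink under the lock */
		partial = NULL;
		npartial = 0;
	}
	page->next = partial;
	partial = page;
	npartial++;
	pthread_mutex_unlock(&lock);

	if (to_drain)			/* drain with the lock dropped */
		toy_drain(to_drain);
}

int main(void)
{
	struct toy_page pages[5];

	for (int i = 0; i < 5; i++)
		toy_put_partial(&pages[i]);
	return 0;
}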

