Subject: Re: [PATCH 5/5] mm, slub: fix incorrect memcg slab count for bulk free
From: Vlastimil Babka <vbabka@suse.cz>
Date: 2021-10-05

On 9/16/21 14:39, Miaohe Lin wrote:
> kmem_cache_free_bulk() already calls memcg_slab_free_hook() for all
> objects when doing a bulk free, so do_slab_free() shouldn't call the
> hook again on the bulk-free path; the double uncharge leads to an
> incorrect memcg slab count.
>
> Fixes: d1b2cf6cb84a ("mm: memcg/slab: uncharge during kmem_cache_free_bulk()")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>

I now noticed the series doesn't Cc: stable and it should, so I hope Andrew
can add the Cc: stable tags together with the review tags. Thanks.
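
To illustrate why the !tail check in the quoted hunk below is enough, here's
a minimal userspace sketch, not the kernel code: the names mirror mm/slub.c,
but the bodies are stand-ins that only track a charge counter (the real
memcg_slab_free_hook() also takes the struct kmem_cache * and walks
obj_cgroup state):

#include <stdio.h>
#include <stddef.h>

static long memcg_charged;		/* stand-in for the memcg slab count */

static void memcg_slab_free_hook(void **p, size_t objects)
{
	(void)p;
	memcg_charged -= objects;	/* uncharge 'objects' objects */
}

/* tail is non-NULL only on the detached-freelist (bulk) path. */
static void do_slab_free(void **head, void **tail, size_t cnt)
{
	(void)cnt;
	/* The fix: uncharge here only on the single-object path. */
	if (!tail)
		memcg_slab_free_hook(head, 1);
	/* ... the actual freeing is elided ... */
}

static void kmem_cache_free_bulk(size_t size, void **p)
{
	/* Bulk free uncharges every object up front ... */
	memcg_slab_free_hook(p, size);
	/*
	 * ... and then frees them through do_slab_free(); without the
	 * !tail check, each object would be uncharged a second time.
	 */
	do_slab_free(&p[0], &p[size - 1], size);
}

int main(void)
{
	void *objs[4] = { NULL, NULL, NULL, NULL };

	memcg_charged = 4;		/* pretend 4 objects are charged */
	kmem_cache_free_bulk(4, objs);
	printf("charge after bulk free: %ld (expected 0)\n", memcg_charged);
	return 0;
}

AFAICS tail is only set by build_detached_freelist() on the
kmem_cache_free_bulk() path, so !tail reliably identifies a single-object
free and the single-free paths keep uncharging as before.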

> ---
> mm/slub.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index f3df0f04a472..d8f77346376d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3420,7 +3420,9 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
>  	struct kmem_cache_cpu *c;
>  	unsigned long tid;
>  
> -	memcg_slab_free_hook(s, &head, 1);
> +	/* memcg_slab_free_hook() is already called for bulk free. */
> +	if (!tail)
> +		memcg_slab_free_hook(s, &head, 1);
>  redo:
>  	/*
>  	 * Determine the currently cpus per cpu slab.
>
