Subject: Re: [PATCH] mm, slub: place the trace before freeing memory in kmem_cache_free()
On 10/30/21 03:11, Yunfeng Ye wrote:
> After the memory is freed, it may be allocated by other CPUs and has
> been recorded by trace. So the timing sequence of the memory tracing is
> inaccurate.
>
> For example, we expect the following timing sequence:
>
>             CPU 0                       CPU 1
>
> (1) alloc xxxxxx
> (2) free xxxxxx
> (3)                             alloc xxxxxx
> (4)                             free xxxxxx
>
> However, the following timing sequence may occur:
>
>             CPU 0                       CPU 1
>
> (1) alloc xxxxxx
> (2)                             alloc xxxxxx
> (3) free xxxxxx
> (4)                             free xxxxxx
>
> So place the trace before freeing memory in kmem_cache_free().

Hi Yunfeng,

Like Muchun, I had some difficulty with the above description, but
now I think I get it. :)

In order to make it easier for others, how about this wording and subject
line, instead:


mm, slub: emit the "free" trace report before freeing memory in kmem_cache_free()

After the memory is freed, it can be immediately allocated by other
CPUs, before the "free" trace report has been emitted. This causes
inaccurate traces.

For example, if the following sequence of events occurs:

            CPU 0                       CPU 1

(1) alloc xxxxxx
(2) free xxxxxx
(3)                             alloc xxxxxx
(4)                             free xxxxxx

...then they will be inaccurately reported via tracing, so that they
appear to have happened in this order. This makes it look like CPU 1
somehow managed to allocate memory that CPU 0 still had allocated for
itself:

            CPU 0                       CPU 1

(1) alloc xxxxxx
(2)                             alloc xxxxxx
(3) free xxxxxx
(4)                             free xxxxxx

In order to avoid this, emit the "free xxxxxx" tracing report just
before the actual call to free the memory, instead of just after it.
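
For reference, here is what kmem_cache_free() ends up looking like with
this patch applied (reconstructed from the diff below):

void kmem_cache_free(struct kmem_cache *s, void *x)
{
	s = cache_from_obj(s, x);
	if (!s)
		return;
	trace_kmem_cache_free(_RET_IP_, x, s->name);
	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
}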


>
> Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
> ---
>  mm/slub.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 432145d7b4ec..427e62034c3f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3526,8 +3526,8 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
>  	s = cache_from_obj(s, x);
>  	if (!s)
>  		return;
> -	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
>  	trace_kmem_cache_free(_RET_IP_, x, s->name);
> +	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
>  }
>  EXPORT_SYMBOL(kmem_cache_free);
>

...the diffs seem correct, too, but I'm not exactly a slub reviewer, so
take that for what it's worth.
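
To make the misordering concrete, here is a small userspace analogue
(plain pthreads; illustrative only, not kernel code, and the file name
"demo.c" is just a placeholder) that forces the same interleaving by
emitting the "free" report only after the memory has already been
handed back to the allocator:

/* Build with: cc -pthread demo.c -o demo */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static void *other_cpu(void *arg)
{
	(void)arg;
	/*
	 * The block freed by main() is already back in the allocator,
	 * so this malloc() may return the same address, and its
	 * "alloc" report prints before main()'s "free" report.
	 */
	void *p = malloc(64);
	printf("CPU 1: alloc %p\n", p);
	free(p);
	printf("CPU 1: free  %p\n", p);
	return NULL;
}

int main(void)
{
	pthread_t t;
	void *p = malloc(64);

	printf("CPU 0: alloc %p\n", p);
	free(p);			/* memory released here... */
	pthread_create(&t, NULL, other_cpu, NULL);
	pthread_join(t, NULL);
	printf("CPU 0: free  %p\n", p);	/* ...but reported only now */
	return 0;
}

The fix in this patch is exactly the inverse: report first, then free,
so the report can never trail a reuse of the same memory.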


thanks,
--
John Hubbard
NVIDIA
