    Subject: Re: [PATCH 4/5] kmemtrace: SLUB hooks.
    On Mon, 2008-08-11 at 09:04 -0500, Christoph Lameter wrote:
    > Eduard - Gabriel Munteanu wrote:
    >
    >
    >
    > >  static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
    > >  {
    > > +        void *ret;
    > > +
    > >          if (__builtin_constant_p(size) &&
    > >                  size <= PAGE_SIZE && !(flags & SLUB_DMA)) {
    > >                          struct kmem_cache *s = kmalloc_slab(size);
    > > @@ -239,7 +280,13 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
    > >                  if (!s)
    > >                          return ZERO_SIZE_PTR;
    > >
    > > -                return kmem_cache_alloc_node(s, flags, node);
    > > +                ret = kmem_cache_alloc_node_notrace(s, flags, node);
    > > +
    > > +                kmemtrace_mark_alloc_node(KMEMTRACE_TYPE_KMALLOC,
    > > +                                          _THIS_IP_, ret,
    > > +                                          size, s->size, flags, node);
    > > +
    > > +                return ret;
    >
    > You could simplify the stuff in slub.h if you would fall back to the uninlined
    > functions in the case that kmemtrace is enabled. IMHO adding additional inline
    > code here does grow these functions to a size where inlining is not useful anymore.

    So, if CONFIG_KMEMTRACE is enabled, make the inlined version go away
    completely? I'm okay with that, though I wonder if that means we now take
    a performance hit when CONFIG_KMEMTRACE is enabled but tracing is
    disabled at run-time...
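
    Something along these lines, I suppose -- a rough, untested sketch only, to
    illustrate the fallback; the kmemtrace hook would then live solely in the
    out-of-line __kmalloc_node(), and the exact #ifdef placement is just
    illustrative:

        /*
         * Untested sketch, not part of the posted patch: with CONFIG_KMEMTRACE
         * enabled the inline fast path is compiled out, so every kmalloc_node()
         * call falls back to the out-of-line __kmalloc_node(), which would
         * carry the tracing hook.
         */
        static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
        {
        #ifndef CONFIG_KMEMTRACE
                if (__builtin_constant_p(size) &&
                    size <= PAGE_SIZE && !(flags & SLUB_DMA)) {
                        struct kmem_cache *s = kmalloc_slab(size);

                        if (!s)
                                return ZERO_SIZE_PTR;

                        return kmem_cache_alloc_node(s, flags, node);
                }
        #endif
                return __kmalloc_node(size, flags, node);
        }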

    > > diff --git a/mm/slub.c b/mm/slub.c
    > > index 315c392..940145f 100644
    > > --- a/mm/slub.c
    > > +++ b/mm/slub.c
    > > @@ -23,6 +23,7 @@
    > >  #include <linux/kallsyms.h>
    > >  #include <linux/memory.h>
    > >  #include <linux/math64.h>
    > > +#include <linux/kmemtrace.h>
    > >
    > >  /*
    > >   * Lock order:
    > > @@ -1652,18 +1653,47 @@ static __always_inline void *slab_alloc(struct kmem_cache *s,
    > >
    > >  void *kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags)
    > >  {
    > > -        return slab_alloc(s, gfpflags, -1, __builtin_return_address(0));
    > > +        void *ret = slab_alloc(s, gfpflags, -1, __builtin_return_address(0));
    > > +
    > > +        kmemtrace_mark_alloc(KMEMTRACE_TYPE_CACHE, _RET_IP_, ret,
    > > +                             s->objsize, s->size, gfpflags);
    > > +
    > > +        return ret;
    > >  }
    >
    > _RET_IP_ == __builtin_return_address(0), right? Put that into a local variable?
    > At least we need consistent usage within one function. Maybe convert
    > __builtin_return_address(0) to _RET_IP_ within slub?

    I think we should just convert SLUB to use _RET_IP_ everywhere. Eduard,
    care to make a patch and send it and rebase this on top of that?
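
    For reference, kmem_cache_alloc() would then end up looking something like
    this (untested, just to show the direction; since _RET_IP_ is defined as
    __builtin_return_address(0) cast to unsigned long, slab_alloc()'s caller
    argument would have to be converted from void * to unsigned long as part of
    the same patch):

        /*
         * Illustrative only, not an actual patch: kmem_cache_alloc() after a
         * hypothetical conversion to _RET_IP_, assuming slab_alloc() has been
         * switched to take the caller address as an unsigned long.
         */
        void *kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags)
        {
                void *ret = slab_alloc(s, gfpflags, -1, _RET_IP_);

                kmemtrace_mark_alloc(KMEMTRACE_TYPE_CACHE, _RET_IP_, ret,
                                     s->objsize, s->size, gfpflags);

                return ret;
        }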


