 
    Date: 2013-09-03
    From: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
    Subject: Re: [PATCH 10/13] tracing/uprobes: Fetch args before reserving a ring buffer
    (2013/09/03 14:44), Namhyung Kim wrote:
    > From: Namhyung Kim <namhyung.kim@lge.com>
    >
    > Fetching from user space should be done in a non-atomic context, so
    > use a per-cpu buffer and copy its content to the ring buffer
    > atomically. Note that we can migrate while accessing user memory,
    > so use a per-cpu mutex to protect against concurrent access.
    >
    > This is needed since we'll be able to fetch args from user memory,
    > which can be swapped out. Before this, uprobes could fetch args only
    > from registers, which are saved in kernel space.
    >
    > While at it, use __get_data_size() and store_trace_args() to reduce
    > code duplication.
    >
    > Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
    > Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
    > Cc: Oleg Nesterov <oleg@redhat.com>
    > Cc: zhangwei(Jovi) <jovi.zhangwei@huawei.com>
    > Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
    > Signed-off-by: Namhyung Kim <namhyung@kernel.org>
    > ---
    > kernel/trace/trace_uprobe.c | 97 +++++++++++++++++++++++++++++++++++++--------
    > 1 file changed, 81 insertions(+), 16 deletions(-)
    >
    > diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
    > index 9f2d12d2311d..9ede401759ab 100644
    > --- a/kernel/trace/trace_uprobe.c
    > +++ b/kernel/trace/trace_uprobe.c
    > @@ -530,21 +530,46 @@ static const struct file_operations uprobe_profile_ops = {
    >  	.release	= seq_release,
    >  };
    > 
    > +static atomic_t uprobe_buffer_ref = ATOMIC_INIT(0);
    > +static void __percpu *uprobe_cpu_buffer;
    > +static DEFINE_PER_CPU(struct mutex, uprobe_cpu_mutex);
    > +
    >  static void uprobe_trace_print(struct trace_uprobe *tu,
    >  				unsigned long func, struct pt_regs *regs)
    >  {
    >  	struct uprobe_trace_entry_head *entry;
    >  	struct ring_buffer_event *event;
    >  	struct ring_buffer *buffer;
    > -	void *data;
    > -	int size, i;
    > +	struct mutex *mutex;
    > +	void *data, *arg_buf;
    > +	int size, dsize, esize;
    > +	int cpu;
    >  	struct ftrace_event_call *call = &tu->p.call;
    > 
    > -	size = SIZEOF_TRACE_ENTRY(is_ret_probe(tu));
    > +	dsize = __get_data_size(&tu->p, regs);
    > +	esize = SIZEOF_TRACE_ENTRY(is_ret_probe(tu));
    > +
    > +	if (WARN_ON_ONCE(!uprobe_cpu_buffer || tu->p.size + dsize > PAGE_SIZE))
    > +		return;
    > +
    > +	cpu = raw_smp_processor_id();
    > +	mutex = &per_cpu(uprobe_cpu_mutex, cpu);
    > +	arg_buf = per_cpu_ptr(uprobe_cpu_buffer, cpu);
    > +
    > +	/*
    > +	 * Use per-cpu buffers for fastest access, but we might migrate
    > +	 * so the mutex makes sure we have sole access to it.
    > +	 */
    > +	mutex_lock(mutex);
    > +	store_trace_args(esize, &tu->p, regs, arg_buf, dsize);
    > +
    > +	size = esize + tu->p.size + dsize;
    >  	event = trace_current_buffer_lock_reserve(&buffer, call->event.type,
    > -						  size + tu->p.size, 0, 0);
    > -	if (!event)
    > +						  size, 0, 0);
    > +	if (!event) {
    > +		mutex_unlock(mutex);
    >  		return;
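
    For readers outside the series, the scheme the commit message describes
    boils down to roughly the following minimal sketch. The helpers
    fetch_user_args() and copy_to_ring_buffer() are placeholders for the
    uprobe fetch step and the atomic ring-buffer reserve/copy/commit, not
    functions from the patch:

	#include <linux/percpu.h>
	#include <linux/mutex.h>
	#include <linux/smp.h>

	/* Placeholders, not from the patch. */
	static void fetch_user_args(void *buf);
	static void copy_to_ring_buffer(void *buf);

	/* One scratch area per cpu, allocated when the first probe is enabled. */
	static void __percpu *scratch_buffer;
	static DEFINE_PER_CPU(struct mutex, scratch_mutex);

	static void record_one_event(void)
	{
		int cpu = raw_smp_processor_id();
		struct mutex *mutex = &per_cpu(scratch_mutex, cpu);
		void *buf = per_cpu_ptr(scratch_buffer, cpu);

		/*
		 * Faulting in user pages may sleep, and the task may migrate
		 * while it sleeps; another task could then run on the original
		 * cpu and pick the same buffer.  Holding that cpu's mutex for
		 * the whole fetch+copy keeps the buffer single-user.
		 */
		mutex_lock(mutex);
		fetch_user_args(buf);		/* non-atomic: copy_from_user() etc. */
		copy_to_ring_buffer(buf);	/* atomic: reserve, memcpy, commit */
		mutex_unlock(mutex);
	}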

    Just for maintenance reasons, I personally like to use a "goto" in this
    case to fold up the mutex_unlock. :)
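
    I.e. something along these lines (just a sketch of the suggestion,
    reusing the identifiers from the quoted hunk):

	event = trace_current_buffer_lock_reserve(&buffer, call->event.type,
						  size, 0, 0);
	if (!event)
		goto out;

	/* ... fill in the entry and commit the event ... */

    out:
	mutex_unlock(mutex);

    That way every exit path, including the success path, funnels through a
    single unlock.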

    The other parts look good to me.

    Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>

    Thank you!


    --
    Masami HIRAMATSU
    IT Management Research Dept. Linux Technology Center
    Hitachi, Ltd., Yokohama Research Laboratory
    E-mail: masami.hiramatsu.pt@hitachi.com



