    Date:	2008-09-26
    From:	Masami Hiramatsu <mhiramat@redhat.com>
    Subject:	Re: [PATCH v6] Unified trace buffer
    Peter Zijlstra wrote:
    > On Fri, 2008-09-26 at 14:05 -0400, Steven Rostedt wrote:
    >> +static void
    >> +rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned nr_pages)
    >> +{
    >> +	struct page *page;
    >> +	struct list_head *p;
    >> +	unsigned i;
    >> +
    >> +	atomic_inc(&cpu_buffer->record_disabled);
    >
    > You probably want synchronize_sched() here (and similar other places) to
    > ensure any active writer on the corresponding cpu is actually stopped.

    Should that really be done in the buffer layer?
    I think it should be done by each tracer, because the buffer layer
    can't ensure that the writers which are actually active have stopped.
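
    For example, a tracer would need to do something like the following
    before asking the buffer layer to shrink (only a rough sketch; the
    tracer_enabled flag and ring_buffer_resize() here are placeholders,
    not names taken from the patch):

	/* tracer side: stop our own writers before shrinking the buffer */
	tracer_enabled = 0;	/* writers test this flag before writing     */
	synchronize_sched();	/* wait until every writer that may still    */
				/* see the old value has left its            */
				/* preempt-disabled section                  */
	ring_buffer_resize(buffer, new_size);	/* now safe to free pages    */

    That way the buffer layer never has to guess whether one of the
    tracer's writers is still inside the buffer when pages are freed.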

    Thank you,

    >
    > Which suggests you want to use something like ring_buffer_lock_cpu() and
    > implement that as above.
    >
    >> +	for (i = 0; i < nr_pages; i++) {
    >> +		BUG_ON(list_empty(&cpu_buffer->pages));
    >> +		p = cpu_buffer->pages.next;
    >> +		page = list_entry(p, struct page, lru);
    >> +		list_del_init(&page->lru);
    >> +		__free_page(page);
    >> +	}
    >> +	BUG_ON(list_empty(&cpu_buffer->pages));
    >> +
    >> +	__ring_buffer_reset_cpu(cpu_buffer);
    >> +
    >> +	check_pages(cpu_buffer);
    >> +
    >> +	atomic_dec(&cpu_buffer->record_disabled);
    >> +
    >> +}
    >
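
    For reference, the ring_buffer_lock_cpu() helper Peter suggests would
    presumably pair the record_disabled increment with the
    synchronize_sched() he mentions. This is only a sketch of that idea,
    not code from the patch:

	/*
	 * Sketch only: stop new writers on this buffer and wait for any
	 * writer already inside its preempt-disabled section to finish
	 * before the caller starts freeing pages.
	 */
	static void ring_buffer_lock_cpu(struct ring_buffer_per_cpu *cpu_buffer)
	{
		atomic_inc(&cpu_buffer->record_disabled);
		synchronize_sched();
	}

	static void ring_buffer_unlock_cpu(struct ring_buffer_per_cpu *cpu_buffer)
	{
		atomic_dec(&cpu_buffer->record_disabled);
	}

    But as I said above, I think it is each tracer's job to stop its own
    writers before the buffer is shrunk, since the buffer layer cannot
    know about them.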

    --
    Masami Hiramatsu

    Software Engineer
    Hitachi Computer Products (America) Inc.
    Software Solutions Division

    e-mail: mhiramat@redhat.com


