Subject: Re: [PATCH 0/3] ring-buffer: less locking and only disable preemption

* Steven Rostedt <rostedt@goodmis.org> wrote:

> Ingo,
>
> These patches need to be put through the ringer. Could you add them to
> your ring-buffer branch, so we can test them out before putting them
> into your master branch.

Hey, in fact your latest iteration has tested out so well on a wide range
of boxes that I've already merged it all into tip/tracing/core.

I'll reuse tip/tracing/ring-buffer for these latest three patches (merge
it up to tip/tracing/core and add the three patches on top), but that's
just a delta; i.e. the whole ring-buffer approach is ready for prime
time, I think.

Hm, do we already deallocate the buffers when we switch tracers?

> The following patches bring the ring buffer closer to a lockless
> solution. They restrict the locking to the actual move of the
> tail/write pointer from one page to the next. Interrupts now stay
> enabled during most of the writes.

very nice direction!
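
Something like this is how I read the new scheme (just a sketch of the
idea, not the actual ring_buffer code; the struct and function names are
made up, and the real implementation also has to close the same-CPU
interrupt race in the fast path, e.g. with local cmpxchg-style ops):

#include <linux/spinlock.h>
#include <linux/preempt.h>

struct rb_page {
	struct rb_page	*next;
	unsigned long	used;			/* bytes consumed on this page */
	char		data[4000];
};

struct rb_cpu_buffer {
	spinlock_t	lock;			/* taken only for page transitions */
	struct rb_page	*tail_page;
};

/* reserve 'len' bytes; the caller commits and then does preempt_enable() */
static void *rb_reserve(struct rb_cpu_buffer *cpu_buf, unsigned long len)
{
	struct rb_page *page;
	unsigned long flags;
	void *event;

	preempt_disable();			/* no irq disabling in the fast path */

	page = cpu_buf->tail_page;
	if (page->used + len <= sizeof(page->data)) {
		/* fast path: the event fits on the current page */
		event = page->data + page->used;
		page->used += len;
		return event;
	}

	/* slow path: moving the tail to the next page takes the lock */
	spin_lock_irqsave(&cpu_buf->lock, flags);
	page = cpu_buf->tail_page = page->next;
	page->used = len;
	event = page->data;
	spin_unlock_irqrestore(&cpu_buf->lock, flags);

	return event;
}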

> A lot of the locking protection is still within the ftrace
> infrastructure. The last patch takes some of that away.
>
> The function tracer cannot be reentrant, simply because it traces
> everything and can therefore run into recursion issues.

Correct, and that's by far the yuckiest aspect of it. And there's
another aspect: NMIs. We've still got the tip/tracing/nmisafe angle with
these commits:

d979781: ftrace: mark lapic_wd_event() notrace
c2c27ba: ftrace: ignore functions that cannot be kprobe-ed
431e946: ftrace: do not trace NMI contexts
1eda930: x86, tracing, nmisafe: fix threadinfo_ -> TI_ rename fallout
84c2ca9: sched: sched_clock() improvement: use in_nmi()
0d84b78: x86 NMI-safe INT3 and Page Fault
a04464b: x86_64 page fault NMI-safe
b335389: Change avr32 active count bit
a581cbd: Change Alpha active count bit
eca0999: Stringify support commas

But I'm not yet fully convinced about the NMI angle: the practical
cross-section with random low-level x86 code is wider than, say, any
sched_clock() impact. We might be best off avoiding it altogether:
force-disable the NMI watchdog while we trace?
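
I.e. the function-trace callback needs something along these lines
anyway (only a sketch with hypothetical names, not the actual ftrace
code): bail out when we are in NMI context or already recursing on this
CPU.

#include <linux/hardirq.h>	/* in_nmi() */
#include <linux/percpu.h>
#include <linux/compiler.h>	/* notrace */

static DEFINE_PER_CPU(int, trace_recursion);

/* hypothetical function-trace callback, (ip, parent_ip) as ftrace passes them */
static void notrace my_trace_func(unsigned long ip, unsigned long parent_ip)
{
	int *depth;

	if (in_nmi())				/* an NMI can hit with our locks held */
		return;

	depth = &get_cpu_var(trace_recursion);	/* disables preemption */
	if (*depth)				/* already tracing on this CPU */
		goto out;

	(*depth)++;
	/* ... record ip/parent_ip into the per-cpu ring buffer ... */
	(*depth)--;
out:
	put_cpu_var(trace_recursion);
}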

Ingo

