Subject: Re: [for-next][PATCH 0/5] selftests: ftrace: Tracing updates to allow instance testing
On Wed, 26 Apr 2017 11:07:52 +0900
Namhyung Kim <namhyung@kernel.org> wrote:

> Hi Steve,
>
> On Tue, Apr 25, 2017 at 09:24:40AM -0400, Steven Rostedt wrote:
> > git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
> > for-next
> >
> > Head SHA1: d6322f6cc483bd512efd3360fa76d0286a5b528b
> >
> >
> > Steven Rostedt (VMware) (5):
> > selftests: ftrace: Allow some tests to be run in a tracing instance
> > selftests: ftrace: Make func_event_triggers and func_traceonoff_triggers tests do instances
> > selftests: ftrace: Have event tests also run in an tracing instance
> > selftests: ftrace: Have some basic tests run in a tracing instance too
> > selftests: ftrace: Allow some event trigger tests to run in an instance
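
For context (an illustrative sketch, not part of the patch set): the tracing instances these tests target are created and destroyed through the tracefs "instances" directory. A minimal C illustration follows; it assumes tracefs is mounted at /sys/kernel/tracing, and the instance name "selftest-demo" is made up. The selftests themselves do the equivalent from shell in ftracetest.

/*
 * Illustrative only: create and remove an ftrace tracing instance.
 * Assumes tracefs is mounted at /sys/kernel/tracing and the caller
 * has the required privileges.
 */
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	const char *inst = "/sys/kernel/tracing/instances/selftest-demo";

	/* mkdir under instances/ asks ftrace to create a new instance
	 * with its own ring buffer and control files. */
	if (mkdir(inst, 0755) && errno != EEXIST) {
		perror("mkdir");
		return 1;
	}

	printf("created instance: %s\n", inst);

	/* rmdir tears the instance down again when the test is done. */
	if (rmdir(inst)) {
		perror("rmdir");
		return 1;
	}

	return 0;
}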
>
> Acked-by: Namhyung Kim <namhyung@kernel.org>
>
>
> But I saw a warning during the below test. I used the for-next branch
> but the commit id was different:
>
> c486bbae781cce337a31cbbbc609f2313ab54542 ("Merge branch
> 'trace/ftrace/core' into trace/ftrace/next").

Yeah, my for-next branch is a merge of urgent and my branch. I don't do
rebases, but this is actually a reset to a new merge between my two
branches.

>
> Thanks,
> Namhyung
>
>
> [2] Basic test for tracers
> [ 908.674961]
> [ 908.675178] ===============================
> [ 908.675611] [ ERR: suspicious RCU usage. ]
> [ 908.675922] 4.11.0-rc5kvm+ #230 Tainted: G W
> [ 908.675922] -------------------------------
> [ 908.675922] /home/namhyung/project/linux/include/linux/rcupdate.h:842 rcu_read_lock() used illegally while idle!
> [ 908.675922]
> [ 908.675922] other info that might help us debug this:
> [ 908.675922]
> [ 908.675922]
> [ 908.675922] RCU used illegally from idle CPU!
> [ 908.675922] rcu_scheduler_active = 2, debug_locks = 0
> [ 908.675922] RCU used illegally from extended quiescent state!
> [ 908.675922] 2 locks held by swapper/0/0:
> [ 908.675922] #0: (max_trace_lock){......}, at: [<ffffffff81143c33>] check_critical_timing+0xa3/0x150
> [ 908.675922] #1: (rcu_read_lock){......}, at: [<ffffffff8111f9b5>] __is_insn_slot_addr+0x5/0x120
> [ 908.675922]
> [ 908.675922] stack backtrace:
> [ 908.675922] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G W 4.11.0-rc5kvm+ #230
> [ 908.675922] Call Trace:
> [ 908.675922] dump_stack+0x68/0x93
> [ 908.675922] lockdep_rcu_suspicious+0xe0/0x100
> [ 908.675922] ? start_kernel+0x3b7/0x3c4
> [ 908.675922] __is_insn_slot_addr+0xe4/0x120
> [ 908.675922] ? start_kernel+0x3b7/0x3c4
> [ 908.675922] __kernel_text_address+0x69/0xb0
> [ 908.675922] ? start_kernel+0x3b7/0x3c4
> [ 908.675922] unwind_get_return_address+0x47/0x60
> [ 908.675922] __save_stack_trace+0x83/0xd0
> [ 908.675922] save_stack_trace+0x1b/0x20
> [ 908.675922] __ftrace_trace_stack+0x22e/0x240
> [ 908.675922] __trace_stack+0x10/0x20
> [ 908.675922] check_critical_timing+0xe9/0x150

Ah, this is a different issue from the stack tracing one. This is irq
tracing. I wonder if we should have latency tracing stop when RCU is
not functioning.
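
A minimal sketch of that idea (an assumption about what such a change could look like, not an actual patch): have the latency tracer skip its stack snapshot when RCU is not watching, so nothing underneath it calls rcu_read_lock() from an extended quiescent state. The helper name and the exact placement are hypothetical; rcu_is_watching() is the existing check for this condition.

/*
 * Sketch only, assuming kernel context: skip the latency tracer's
 * stack snapshot when RCU is not watching (e.g. in the idle loop), so
 * the unwinder never reaches code like __is_insn_slot_addr() that
 * takes rcu_read_lock() in an extended quiescent state.
 */
#include <linux/rcupdate.h>

static inline bool latency_stack_trace_allowed(void)
{
	/* rcu_is_watching() is false in idle/EQS, which is exactly
	 * where lockdep flagged the illegal rcu_read_lock(). */
	return rcu_is_watching();
}

/*
 * The __trace_stack() call in check_critical_timing() would then be
 * guarded roughly like:
 *
 *	if (latency_stack_trace_allowed())
 *		__trace_stack(tr, flags, 0, pc);
 */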

-- Steve

> [ 908.675922] ? do_idle+0x88/0x120
> [ 908.675922] ? default_idle_call+0x23/0x37
> [ 908.675922] ? do_idle+0x88/0x120
> [ 908.675922] stop_critical_timings+0xcf/0xe0
> [ 908.675922] default_idle_call+0x23/0x37
> [ 908.675922] do_idle+0x88/0x120
> [ 908.675922] cpu_startup_entry+0x1d/0x20
> [ 908.675922] rest_init+0x130/0x140
> [ 908.675922] start_kernel+0x3b7/0x3c4
> [ 908.675922] x86_64_start_reservations+0x2f/0x31
> [ 908.675922] x86_64_start_kernel+0x17d/0x190
> [ 908.675922] start_cpu+0x14/0x14
> [ 908.675922] ? start_cpu+0x14/0x14
> [PASS]
