 
Date: 22 Jun 2005
From: Ingo Molnar
Subject: Re: [patch] Real-Time Preemption, -RT-2.6.12-rc6-V0.7.48-00

* William Weston <weston@sysex.net> wrote:

>                   _------=> CPU#
>                  / _-----=> irqs-off
>                 | / _----=> need-resched
>                 || / _---=> hardirq/softirq
>                 ||| / _--=> preempt-depth
>                 |||| /
>                 |||||     delay
>     cmd     pid ||||| time  |   caller
>        \   /    |||||   \   |   /
> [...]
>   wmResh-3189  1....    1us : up_mutex (sock_def_readable)
>   wmResh-3189  1....    2us : __up_mutex (up_mutex)
>   <idle>-0     0Dnh2    2us : _raw_spin_unlock (__schedule)
>   <idle>-0     0Dnh1    2us!: preempt_schedule (__schedule)
>   wmResh-3189  1Dnh.  204us : _raw_spin_lock (__up_mutex)
>   wmResh-3189  1Dnh1  205us : _raw_spin_unlock (__up_mutex)
>   <idle>-0     0Dnh1  205us : _raw_spin_lock (__schedule)
>   wmResh-3189  1....  205us : _read_unlock (unix_stream_sendmsg)

look at the CPU# column: we have trace entries from both CPU#0 and CPU#1,
and both of them are delayed by roughly 200 usecs! The CPU#0 delay
happened in the idle thread, between preempt_schedule() in __schedule()
and the _raw_spin_lock() in __schedule(). It's a codepath where no
spinning is done. The CPU#1 delay happened in the wmResh process, between
__up_mutex()'s entry and the first _raw_spin_lock() it took. This too is
a codepath where no spinning is done. (And even if there were spinning,
the two locks are not the same.)
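
as an aside, gaps like these can be spotted mechanically by diffing
consecutive per-CPU timestamps in the trace. A minimal sketch in C,
assuming single-digit CPU numbers, the field layout shown above and an
illustrative 100 usec threshold (none of which are guaranteed by the
tracer):

	#include <stdio.h>

	#define MAX_CPUS  8
	#define GAP_USECS 100	/* flag anything larger than this */

	int main(void)
	{
		char line[256];
		long last[MAX_CPUS];
		int i;

		for (i = 0; i < MAX_CPUS; i++)
			last[i] = -1;

		while (fgets(line, sizeof(line), stdin)) {
			char cmd[64], flags[16];
			long usecs;
			int cpu;

			/*
			 * Expected shape:
			 *   "wmResh-3189 1Dnh. 204us : func (caller)"
			 * The flags field starts with the CPU number.
			 */
			if (sscanf(line, "%63s %15s %ldus",
				   cmd, flags, &usecs) != 3)
				continue;
			cpu = flags[0] - '0';	/* single-digit CPUs only */
			if (cpu < 0 || cpu >= MAX_CPUS)
				continue;

			if (last[cpu] >= 0 && usecs - last[cpu] > GAP_USECS)
				printf("CPU#%d: %ld usec gap before %s at %ldus\n",
				       cpu, usecs - last[cpu], cmd, usecs);
			last[cpu] = usecs;
		}
		return 0;
	}

fed the (unquoted) trace above, it would flag the ~202 usec jump on
CPU#1 and the ~203 usec jump on CPU#0.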

in other words, since there is no OS-level explanation for the delay,
this can only be an effect of the hardware/system. (Or it could be a bug
in the measurement, but the likelihood of seeing a 200 usec bump in the
measurement itself is quite small.)
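
the classic way to confirm such a below-the-OS delay (SMIs being the
usual suspect) is a tight loop that reads the TSC and reports any gap
far larger than one loop iteration. A rough sketch of the general
technique (the threshold and iteration count are illustrative
assumptions; in plain userspace it must be pinned to one CPU and will
also pick up interrupts and preemption, so only gaps that persist with
those ruled out point at firmware):

	#include <stdio.h>
	#include <stdint.h>

	static inline uint64_t rdtsc(void)
	{
		uint32_t lo, hi;

		asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
		return ((uint64_t)hi << 32) | lo;
	}

	int main(void)
	{
		/* ~200 usec at 1 GHz -- scale to the real TSC frequency */
		const uint64_t threshold = 200000;
		uint64_t prev, now;
		long i;

		prev = rdtsc();
		for (i = 0; i < 100000000L; i++) {
			now = rdtsc();
			if (now - prev > threshold)
				printf("gap of %llu cycles\n",
				       (unsigned long long)(now - prev));
			prev = now;
		}
		return 0;
	}

if the gaps survive with interrupts and preemption ruled out, the
remaining explanation sits below the OS, which is exactly the argument
above.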

Ingo
