    Date: 8 Dec 2004
    From: K.R. Foley <kr@cybsft.com>
    Subject: Re: [patch] Real-Time Preemption, -RT-2.6.10-rc2-mm3-V0.7.32-6
    Ingo Molnar wrote:
    > * K.R. Foley <kr@cybsft.com> wrote:
    >
    >
    >>Could you explain what the attached trace means. It looks to me like
    >>the trace starts in try_to_wake_up when we are trying to wake amlat,
    >>but then before we finish we get a hit on IRQ 8 and run the IRQ
    >>handler??? Or do I somehow have it backwards? :)
    >

    Thank you. I really did have it backwards. The thing that confused me
    was that trace_start... gets called with the task that we are trying to
    wake up. I didn't follow the trace code far enough to realize that it
    later starts getting task info from current instead of p. :) This all
    makes more sense now.
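
    In other words (a simplified sketch of what I now think the tracer does;
    the variable names below are made up for illustration, this is not the
    actual -RT tracer code):

        /*
         * Called from try_to_wake_up() with p = the task being woken
         * (IRQ 8-677 in the trace): remember whose wakeup latency we
         * are measuring.
         */
        void __trace_start_sched_wakeup(struct task_struct *p)
        {
                wakeup_task = p;                /* hypothetical global */
                wakeup_cpu = task_cpu(p);
        }

        /*
         * ...but every trace entry logged from here on is stamped with
         * current - the task doing the waking (amlat-4973) - until the
         * woken task is actually switched in by __switch_to().
         */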

    I am still confused about one thing, unrelated to this. RT tasks never
    expire and thus are never moved to the expired array. Does that imply
    that we never switch the active and expired arrays? If so, how do tasks
    that do expire get moved back into the active array?
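    (For reference, the array switch in the stock O(1) scheduler happens in
    schedule() whenever the active array runs empty - a simplified sketch of
    that logic, not a verbatim copy of kernel/sched.c:)

        array = rq->active;
        if (unlikely(!array->nr_active)) {
                /*
                 * The active array is empty: swap the active and expired
                 * array pointers.  All expired (SCHED_NORMAL) tasks become
                 * runnable again in one step; RT tasks are never put on
                 * the expired array, so they never wait for this switch.
                 */
                rq->active = rq->expired;
                rq->expired = array;
                array = rq->active;
        }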

    >
    >> amlat-4973 0-h.3 0µs : __trace_start_sched_wakeup (try_to_wake_up)
    >> amlat-4973 0-h.3 1µs : _raw_spin_unlock (try_to_wake_up)
    >> amlat-4973 0-h.2 1µs : preempt_schedule (try_to_wake_up)
    >> amlat-4973 0 2µs : wake_up_process <IRQ 8-677> (0 1):
    >
    >
    > this portion shows that amlat-4973 woke up IRQ_8-677. Subsequently the
    > scheduler picked it from a list of 5 tasks:
    >
    >
    >> amlat-4973 0-..2 13µs : trace_array (__schedule)
    >> amlat-4973 0 14µs : __schedule <IRQ 8-677> (0 1):
    >> amlat-4973 0 14µs+: __schedule <amlat-4973> (1 2):
    >> amlat-4973 0 18µs+: __schedule <<unknown-792> (39 3a):
    >> amlat-4973 0 21µs : __schedule <<unknown-4> (69 6e):
    >> amlat-4973 0 21µs : __schedule <<unknown-4854> (73 78):
    >> amlat-4973 0-..2 22µs+: trace_array (__schedule)
    >> IRQ 8-677 0-..2 31µs : __switch_to (__schedule)
    >
    >
    > IRQ_8's RT priority was 1, amlat's priority was 2, so IRQ-8 got
    > selected. (there were also other, SCHED_NORMAL tasks with pid 792, 4 and
    > 4854 in the queue but they did not get selected) [ Note that in reality
    > the O(1) scheduler only considered IRQ_8 when picking the next task,
    > it's the tracer that listed all runnable tasks, to make it easier to
    > validate scheduler logic. This 'list all runnable tasks at schedule()
    > time' tracing is only done if both tracing and rw-deadlock detection is
    > enabled.]
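
    (For anyone else reading the trace: the actual pick of the next task in
    the O(1) scheduler boils down to roughly the following - a simplified
    sketch along the lines of kernel/sched.c, not the exact code:)

        /*
         * Find the highest-priority (lowest-numbered) non-empty queue in
         * the active array and take the first task on it.  RT priorities
         * occupy the lowest indices, so IRQ 8-677 (prio 1) is found
         * before amlat-4973 (prio 2) and before any SCHED_NORMAL task.
         */
        idx = sched_find_first_bit(array->bitmap);
        queue = array->queue + idx;
        next = list_entry(queue->next, task_t, run_list);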
    >
    > in this trace you can see the new RT global balancing in the works as
    > well:
    >
    >
    >> IRQ 8-677 0 32µs : schedule <amlat-4973> (1 0):
    >> IRQ 8-677 0-..2 32µs : finish_task_switch (__schedule)
    >> IRQ 8-677 0-..2 33µs : smp_send_reschedule_allbutself (finish_task_switch)
    >> IRQ 8-677 0-..2 33µs : __bitmap_weight (smp_send_reschedule_allbutself)
    >> IRQ 8-677 0-..2 34µs : __send_IPI_shortcut (smp_send_reschedule_allbutself)
    >
    >
    > here the scheduler noticed that a higher-prio RT task (IRQ_8) preempted
    > a lower-prio but still RT task (amlat), and sent an IPI (inter-processor
    > interrupt) to another CPU in the system so that amlat can run on the
    > other CPU.
    >
    > Ingo
    >
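
    (If I am reading the global balancing right, the check at the end is
    conceptually something like this - my interpretation of the trace above,
    not the actual -RT code:)

        /*
         * In finish_task_switch(): the task we just switched away from
         * (amlat) is still runnable and is an RT task, so kick the other
         * CPUs with a reschedule IPI; one of them can pull amlat over and
         * run it immediately instead of letting it wait behind IRQ 8.
         */
        if (rt_task(prev) && prev->state == TASK_RUNNING)
                smp_send_reschedule_allbutself();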

