Subject: Re: RT and Cascade interrupts
From: Trond Myklebust
Date: 2005-06-02

On 01.06.2005 at 23:31 (-0400), john cooper wrote:
> I fully share your frustration of wanting to "use the
> latest patch -- dammit". However there are other practical
> constraints coming into play. This tree has accumulated a
> substantial amount of fixes for scheduler violation assertions
> along with associated testing and has fared well thus far.
> The bug under discussion here is the last major operational
> problem found in the associated testing process. Arriving
> at this point also required development of target specific
> driver/board code so a resync to a later version is not a
> trivial operation. However it would be justifiable in the
> case of encountering an impasse with the current tree.

My point is that you are chasing timer bugs caused by synchronization
problems in code which was obviously never designed to provide that
synchronization. Once that fact is established, one moves on and
considers the code which does support synchronization.

> > Could you then apply the following debugging patch? It should warn you
> > in case something happens to corrupt base->running_timer (something
> > which would screw up del_timer_sync()). I'm not sure that can happen,
> > but it might be worth checking.
>
> Yes, thanks. Though the event trace does not suggest
> reentry into __run_timer(), but rather a preemption of it
> during the call to rpc_run_timer() by a high priority
> application task in the midst of an RPC. The preempting
> task requeues the timer in the cascade at the tail of
> xprt_transmit(). rpc_run_timer() upon resuming execution
> unconditionally clears the RPC_TASK_HAS_TIMER flag. This
> creates the inconsistent state.

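To restate the interleaving you are describing as a timeline (a sketch
reconstructed from your trace description above; the call sites are as
you reported them, not independently verified):

    /* ksoftirqd (softirq context):
     *   __run_timer()
     *     dequeues the timer from the cascade,
     *     sets base->running_timer
     *   rpc_run_timer(task)
     *     ... preempted here by the high-priority application task ...
     *
     * preempting task, mid-RPC:
     *   xprt_transmit()
     *     __rpc_sleep_on() -> __mod_timer()   requeues the same timer,
     *                                         sets RPC_TASK_HAS_TIMER
     *
     * ksoftirqd resumes:
     *   rpc_run_timer() (continued)
     *     unconditionally clears RPC_TASK_HAS_TIMER
     *     -> the flag now disagrees with the timer actually being queued
     */
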
There are NO cases where that is supposed to be allowed to occur. This
case is precisely what del_timer_sync() is supposed to handle.
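
Schematically, that guarantee looks like this (a simplified sketch, not
the actual kernel/timer.c source; timer_callback_running() is a
hypothetical stand-in for the base->running_timer check):

    /* dequeue a pending timer AND wait out a callback that is already
     * executing on another CPU */
    int del_timer_sync(struct timer_list *timer)
    {
        for (;;) {
            int ret = del_timer(timer);          /* dequeue if pending */
            if (!timer_callback_running(timer))  /* hypothetical helper */
                return ret;
            cpu_relax();        /* callback in flight: wait for it */
        }
    }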

> No explicit deletion attempt of the timer (synchronous or
> otherwise) is coming into play in the failure scenario as
> witnessed by the event trace. Rather it is the implicit
> dequeue of the timer from the cascade in __run_timer() and
> attempt to track ownership of it in rpc_run_timer() via
> RPC_TASK_HAS_TIMER which is undermined in the case of
> preemption.

No!!! The responsibility for tracking timers that have been dequeued and
that are currently running inside __run_timer() lies fairly and squarely
with del_timer_sync().
There is NOTHING within the RT patches that implies that the existing
callers of del_timer_sync() should be burdened with having to do
additional tracking of pending timers. To do so would be a major change
of the existing API, and would require a lot of justification.
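
The entire contract an existing caller relies on is no more than this
(a hypothetical driver-style caller, purely for illustration):

    del_timer_sync(&foo->timer);  /* on return: timer not pending and
                                     its callback no longer running */
    kfree(foo);                   /* hence safe to free */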

IOW: nobody but you is claiming that the RPC code is trying to deal with
this case by tracking RPC_TASK_HAS_TIMER. That is not its purpose, nor
should it be in the RT case.

> From earlier mail:
>
> > There should be no instances of RPC entering call_transmit() or any
> > other tk_action callback with a pending timer.
>
> My description wasn't clear. The timeout isn't pending
> before call_transmit(). Rather the RPC appears to be
> blocked elsewhere and upon wakeup via __run_timer()/xprt_timer()
> preempts ksoftirqd and does the __rpc_sleep_on()/__mod_timer()
> at the very tail of xprt_transmit().

No!!! How is this supposed to happen? There is only one thread that is
allowed to call rpc_sleep_on(), and that is the exact same thread that
is calling __rpc_execute(). It may call rpc_sleep_on() only from inside
a task->tk_action() call, and therefore only _after_ it has called
rpc_delete_timer(). There is supposed to be strict ordering here!
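
Schematically (a sketch of that invariant, not the actual
__rpc_execute() source):

    /* one pass of the rpc_task state machine, run by the single
     * thread that owns the task */
    for (;;) {
        if (task->tk_action == NULL)
            break;
        rpc_delete_timer(task);  /* timer is gone _before_ tk_action runs */
        task->tk_action(task);   /* only here may rpc_sleep_on() re-arm
                                    the timer via __mod_timer() */
    }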

Trond

