 
Date: Thu, 15 Jan 2009
From: Jarek Poplawski
Subject: Re: deadlocks if use htb
On Thu, Jan 15, 2009 at 11:46:48AM +0100, Peter Zijlstra wrote:
> On Thu, 2009-01-15 at 09:01 +0000, Jarek Poplawski wrote:
...
> > spin_lock
> > (not this hrtimer's anymore)
> > __remove_hrtimer
> > list_add_tail enqueue_hrtimer
> >
>
> (looking at .28 code)
>
> run_hrtimer_pending() reads like:
>
> while (pending timers) {
>         __remove_hrtimer(timer, HRTIMER_STATE_CALLBACK);
>         spin_unlock(&cpu_base->lock);
>
>         fn(timer);
>
>         spin_lock(&cpu_base->lock);
>         timer->state &= ~HRTIMER_STATE_CALLBACK; // _should_ result in HRTIMER_STATE_INACTIVE
>         if (HRTIMER_RESTART)
>                 re-queue
>         else if (timer->state != INACTIVE) {
>                 // so another cpu re-queued this timer _while_ we were executing it
>                 if (timer is first && !reprogram) {
>                         __remove_hrtimer(timer, HRTIMER_STATE_PENDING);
>                         list_add_tail(timer, &cb_pending);
>                 }
>         }
> }
>
> So in the window where we drop the lock, one can, as you said, have
> another cpu requeue the timer, but the timer's rb_node and cb_entry links
> are free, so it should not cause the data corruption we're seeing.
>
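To make that window concrete, here is a rough interleaving sketch. This is
only my reading of the 2.6.28 flow quoted above; the CPU labels and
abbreviated state names are illustrative:

    CPU0: run_hrtimer_pending()              CPU1: hrtimer_start()
    ---------------------------              ---------------------
    spin_lock(&cpu_base->lock);
    __remove_hrtimer(timer, CALLBACK);
    spin_unlock(&cpu_base->lock);
    fn(timer);                               spin_lock(&cpu_base->lock);
                                             enqueue_hrtimer(timer); /* rbtree */
                                             spin_unlock(&cpu_base->lock);
    spin_lock(&cpu_base->lock);
    timer->state &= ~CALLBACK;               /* ENQUEUED bit still set */
    /* state != INACTIVE, so: */
    __remove_hrtimer(timer, PENDING);        /* off the rbtree again */
    list_add_tail(&timer->cb_entry, &cb_pending);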

Can't the timer be enqueued to the list (without a lock held) and to the
rbtree at the same time? Then the removal is done for the list only?
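
Both memberships use separate links embedded in the timer itself, so nothing
structurally prevents being on both at once (abridged from 2.6.28's
struct hrtimer; the comments are mine):

    struct hrtimer {
            struct rb_node          node;     /* linked into base->active rbtree */
            ...
            unsigned long           state;    /* INACTIVE/ENQUEUED/CALLBACK/PENDING bits */
            struct list_head        cb_entry; /* linked into cpu_base->cb_pending list */
            ...
    };

If some path adds cb_entry to the pending list while node is (or becomes)
linked into the rbtree, and a later removal unlinks only one of the two, the
other structure keeps stale links into the timer, which looks like the kind
of corruption we're chasing.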

Jarek P.

