Subject: Re: rt14: strace -> migrate_disable_atomic imbalance

On 09/22, Peter Zijlstra wrote:
>
> +static void wait_task_inactive_sched_in(struct preempt_notifier *n, int cpu)
> +{
> +	struct task_struct *p;
> +	struct wait_task_inactive_blocked *blocked =
> +		container_of(n, struct wait_task_inactive_blocked, notifier);
> +
> +	hlist_del(&n->link);
> +
> +	p = ACCESS_ONCE(blocked->waiter);
> +	blocked->waiter = NULL;
> +	wake_up_process(p);
> +}
> ...
> +static void
> +wait_task_inactive_sched_out(struct preempt_notifier *n, struct task_struct *next)
> +{
> +	if (current->on_rq) /* we're not inactive yet */
> +		return;
> +
> +	hlist_del(&n->link);
> +	n->ops = &wait_task_inactive_ops_post;
> +	hlist_add_head(&n->link, &next->preempt_notifiers);
> +}

Tricky ;) Yes, the first ->sched_out() is not enough: it can fire on a
preemption while p is still on the runqueue, and even when p really
blocks it keeps running on its CPU until the context switch completes,
hence the ->on_rq check and the hand-over of the notifier to @next.
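
For archive readers, a rough sketch of where the two notifier hooks
fire, annotated from the mainline __schedule()/context_switch() path of
that era (heavily simplified, details elided):

	/* simplified __schedule() flow, not part of the patch */
	if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) {
		deactivate_task(rq, prev, DEQUEUE_SLEEP);
		prev->on_rq = 0;	/* cleared long before the switch */
	}

	next = pick_next_task(rq);
	if (likely(prev != next)) {
		/*
		 * prepare_task_switch() -> fire_sched_out_preempt_notifiers():
		 * prev is dequeued but still running on this CPU, which is
		 * why the patch re-hooks the notifier onto @next here
		 * instead of waking the waiter.
		 */
		context_switch(rq, prev, next);
		/*
		 * finish_task_switch() -> fire_sched_in_preempt_notifiers():
		 * runs on @next after the switch; only now is prev really
		 * inactive, so wait_task_inactive_sched_in() can safely
		 * wake_up_process() the waiter.
		 */
	}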

> unsigned long wait_task_inactive(struct task_struct *p, long match_state)
> {
> ...
> +	rq = task_rq_lock(p, &flags);
> +	trace_sched_wait_task(p);
> +	if (!p->on_rq) /* we're already blocked */
> +		goto done;

This doesn't look right. schedule() clears ->on_rq long before
__switch_to() etc., so !p->on_rq alone does not mean p is inactive.

And it seems that we check ->on_cpu above; this is not UP friendly
(->on_cpu only exists with CONFIG_SMP).
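
A sketch (mine, not from the patch) of how that early-exit test could be
made both correct and UP-safe by reusing the task_running() helper the
current wait_task_inactive() already relies on:

	rq = task_rq_lock(p, &flags);
	trace_sched_wait_task(p);
	/*
	 * ->on_rq is cleared by deactivate_task() early in schedule();
	 * task_running() (->on_cpu on SMP, rq->curr == p on UP) says
	 * whether p is still on its CPU.  Only both together mean
	 * "already blocked".
	 */
	if (!p->on_rq && !task_running(rq, p)) /* dequeued and switched out */
		goto done;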

>
> -		set_current_state(TASK_UNINTERRUPTIBLE);
> -		schedule_hrtimeout(&to, HRTIMER_MODE_REL);
> -		continue;
> -	}
> +	hlist_add_head(&blocked.notifier.link, &p->preempt_notifiers);
> +	task_rq_unlock(rq, p, &flags);

I thought about reimplementing wait_task_inactive() too, but afaics
there is a problem: what prevents us from racing with p itself doing
preempt_notifier_register()?  I guess register_ needs rq->lock too.
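
For reference, preempt_notifier_register() in mainline kernel/sched.c is
just an unlocked insertion into current's list, which is exactly what
makes that race possible:

	void preempt_notifier_register(struct preempt_notifier *notifier)
	{
		hlist_add_head(&notifier->link, &current->preempt_notifiers);
	}

A hypothetical, untested sketch of a locked variant (it lives in
kernel/sched.c, so it can use task_rq_lock() directly):

	void preempt_notifier_register(struct preempt_notifier *notifier)
	{
		unsigned long flags;
		struct rq *rq;

		rq = task_rq_lock(current, &flags);
		hlist_add_head(&notifier->link, &current->preempt_notifiers);
		task_rq_unlock(rq, current, &flags);
	}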

Oleg.


