Subject: Re: workqueue: WARN at kernel/workqueue.c:2176

On 06/05/2014 06:54 AM, Lai Jiangshan wrote:
> The patch has not been tested by Jason; I don't know whether it fixes the problem.
> The changelog, including the "Reported-by:" and "Tested-by:" tags, needs to be
> updated after the fix is confirmed.
>

With this patch, my workload ran overnight without hitting the warning.
This seems promising. I would like to run it for a day or two more before
declaring success, though. Just to be sure :).

> ------------
>
> Subject: [PATCH] sched: migrate the waking tasks
>
> Currently, the code silently skips migrating a waking task when TTWU_QUEUE is
> enabled.
>
> When a task is waking, it is pending on the wake_list of its rq, but it is not
> on the runqueue (task->on_rq == 0). In this case, set_cpus_allowed_ptr() and
> __migrate_task() will not migrate it, because it is not on the queue.
>
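As context for the window being described, the remote wake-up path looks
roughly like this (a simplified sketch modeled on the 3.15-era
ttwu_queue_remote() in kernel/sched/core.c; the comments are mine, not from
the kernel source):

static void ttwu_queue_remote(struct task_struct *p, int cpu)
{
	struct rq *rq = cpu_rq(cpu);

	/*
	 * Park the task on the remote rq's wake_list.  At this point
	 * p->state == TASK_WAKING and p->on_rq == 0, so a concurrent
	 * set_cpus_allowed_ptr() or __migrate_task() sees the task as
	 * "not on a runqueue" and skips the migration.
	 */
	if (llist_add(&p->wake_entry, &rq->wake_list))
		smp_send_reschedule(cpu);	/* IPI; target runs sched_ttwu_pending() */
}

The task only becomes visible to the migration code once the target CPU
drains its wake_list in sched_ttwu_pending(), which is exactly the window
the patch below closes.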
> This behavior is incorrect: the task has already been woken up, so it will
> run on the wrong CPU, without correct placement, until the next wake-up or
> the next update of its cpus_allowed mask.
>
> To fix this problem, we need to put the waking tasks on the runqueue
> (transition them to the runnable state) before migrating them.
>
> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
> ---
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 268a45e..d05a5a1 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -1474,20 +1474,24 @@ static int ttwu_remote(struct task_struct *p, int wake_flags)
>  }
>
>  #ifdef CONFIG_SMP
> -static void sched_ttwu_pending(void)
> +static void sched_ttwu_pending_locked(struct rq *rq)
>  {
> -	struct rq *rq = this_rq();
>  	struct llist_node *llist = llist_del_all(&rq->wake_list);
>  	struct task_struct *p;
>
> -	raw_spin_lock(&rq->lock);
> -
>  	while (llist) {
>  		p = llist_entry(llist, struct task_struct, wake_entry);
>  		llist = llist_next(llist);
>  		ttwu_do_activate(rq, p, 0);
>  	}
> +}
>
> +static void sched_ttwu_pending(void)
> +{
> +	struct rq *rq = this_rq();
> +
> +	raw_spin_lock(&rq->lock);
> +	sched_ttwu_pending_locked(rq);
>  	raw_spin_unlock(&rq->lock);
>  }
>
> @@ -4530,6 +4534,11 @@ int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
>  		goto out;
>
>  	dest_cpu = cpumask_any_and(cpu_active_mask, new_mask);
> +
> +	/* Ensure it is on rq for migration if it is waking */
> +	if (p->state == TASK_WAKING)
> +		sched_ttwu_pending_locked(rq);
> +
>  	if (p->on_rq) {
>  		struct migration_arg arg = { p, dest_cpu };
>  		/* Need help from migration thread: drop lock and wait. */
> @@ -4576,6 +4585,10 @@ static int __migrate_task(struct task_struct *p, int src_cpu, int dest_cpu)
>  	if (!cpumask_test_cpu(dest_cpu, tsk_cpus_allowed(p)))
>  		goto fail;
>
> +	/* Ensure it is on rq for migration if it is waking */
> +	if (p->state == TASK_WAKING)
> +		sched_ttwu_pending_locked(rq_src);
> +
>  	/*
>  	 * If we're not on a rq, the next wake-up will ensure we're
>  	 * placed properly.
>

--
-- Jason J. Herne (jjherne@linux.vnet.ibm.com)


