Date: Fri, 7 Aug 2015
From: Tejun Heo
Subject: Re: [RFC][PATCH 1/4] sched: Fix a race between __kthread_bind() and sched_setaffinity()

On Fri, Aug 07, 2015 at 04:27:08PM +0200, Peter Zijlstra wrote:
> Which is the rescue thread attaching itself to a pool that needs help,
> and obviously the rescue thread isn't new, so kthread_bind() doesn't
> work right.
>
> The best I could come up with is something like the below on top; does
> that work for you? I'll go give it some runtime.
>
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -1622,11 +1622,15 @@ static struct worker *alloc_worker(int n
>   * cpu-[un]hotplugs.
>   */
>  static void worker_attach_to_pool(struct worker *worker,
> -				  struct worker_pool *pool)
> +				  struct worker_pool *pool,
> +				  bool new)
>  {
>  	mutex_lock(&pool->attach_mutex);
>
> -	kthread_bind_mask(worker->task, pool->attrs->cpumask);
> +	if (new)
> +		kthread_bind_mask(worker->task, pool->attrs->cpumask);
> +	else
> +		set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
>
>  	/*
>  	 * The pool->attach_mutex ensures %POOL_DISASSOCIATED remains
> @@ -1712,7 +1716,7 @@ static struct worker *create_worker(stru
>  	set_user_nice(worker->task, pool->attrs->nice);
>
>  	/* successful, attach the worker to the pool */
> -	worker_attach_to_pool(worker, pool);
> +	worker_attach_to_pool(worker, pool, true);
>
>  	/* start the newly created worker */
>  	spin_lock_irq(&pool->lock);
> @@ -2241,7 +2245,7 @@ static int rescuer_thread(void *__rescue
>
>  	spin_unlock_irq(&wq_mayday_lock);
>
> -	worker_attach_to_pool(rescuer, pool);
> +	worker_attach_to_pool(rescuer, pool, false);
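For reference, kthread_bind_mask() in this series boils down to
something like the following (a simplified sketch from my reading of
patch 1/4, not a verbatim quote):

	if (!wait_task_inactive(p, state)) {
		WARN_ON(1);
		return;
	}

	/* safe: the task is inactive and cannot run until woken */
	raw_spin_lock_irqsave(&p->pi_lock, flags);
	do_set_cpus_allowed(p, mask);
	p->flags |= PF_NO_SETAFFINITY;	/* lock out sched_setaffinity() */
	raw_spin_unlock_irqrestore(&p->pi_lock, flags);

The rescuer attaches itself to the pool while it is running, so
wait_task_inactive() on itself can never succeed and the WARN_ON()
fires; that's why the !new case above has to fall back to
set_cpus_allowed_ptr().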

Hmmm... the race condition didn't exist for workqueue in the first
place, right? As long as PF_NO_SETAFFINITY is set before the affinity
is configured, there's no race condition. I think the code was better
before. Can't we just revert the workqueue.c part?
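
The pre-patch ordering I mean, as a simplified sketch (not a verbatim
quote of the old workqueue.c):

	/* rescuer creation: the flag goes up before the task ever runs */
	rescuer->task = kthread_create(rescuer_thread, rescuer,
				       "%s", wq->name);
	rescuer->task->flags |= PF_NO_SETAFFINITY;
	wake_up_process(rescuer->task);

	/* every later attach just (re)applies the pool's cpumask */
	mutex_lock(&pool->attach_mutex);
	set_cpus_allowed_ptr(rescuer->task, pool->attrs->cpumask);
	mutex_unlock(&pool->attach_mutex);

and sched_setaffinity() bails out once it sees the flag:

	if (p->flags & PF_NO_SETAFFINITY) {
		retval = -EINVAL;
		goto out_put_task;
	}

Anything user space manages to do in the window before the flag is set
gets overwritten by the set_cpus_allowed_ptr() that follows, and after
that the check above keeps user space out for good.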

Thanks.

--
tejun

