Subject: Re: [PATCH v3] sched: fix tsk->pi_lock isn't held when do_set_cpus_allowed()
Ping Ingo, ;-)
On 8/28/15 9:29 PM, Peter Zijlstra wrote:
> On Fri, Aug 28, 2015 at 02:55:56PM +0800, Wanpeng Li wrote:
>> This patch fixes it by following the rules for changing task_struct::cpus_allowed
>> with both pi_lock and rq->lock held.
> Thanks, I made that the below. There was a pin leak and I turned the
> safety check into a WARN_ON because it really should not happen.
>
> I also munged some of the comments a bit and did some slight edits to
> the Changelog.
>
> ---
> Subject: sched: 'Annotate' migrate_tasks()
> From: Wanpeng Li <wanpeng.li@hotmail.com>
> Date: Fri, 28 Aug 2015 14:55:56 +0800
>
> | WARNING: CPU: 0 PID: 13 at kernel/sched/core.c:1156 do_set_cpus_allowed+0x7e/0x80()
> | Modules linked in:
> | CPU: 0 PID: 13 Comm: migration/0 Not tainted 4.2.0-rc1-00049-g25834c7 #2
> | Call Trace:
> | dump_stack+0x4b/0x75
> | warn_slowpath_common+0x8b/0xc0
> | warn_slowpath_null+0x22/0x30
> | do_set_cpus_allowed+0x7e/0x80
> | cpuset_cpus_allowed_fallback+0x7c/0x170
> | select_fallback_rq+0x221/0x280
> | migration_call+0xe3/0x250
> | notifier_call_chain+0x53/0x70
> | __raw_notifier_call_chain+0x1e/0x30
> | cpu_notify+0x28/0x50
> | take_cpu_down+0x22/0x40
> | multi_cpu_stop+0xd5/0x140
> | cpu_stopper_thread+0xbc/0x170
> | smpboot_thread_fn+0x174/0x2f0
> | kthread+0xc4/0xe0
> | ret_from_kernel_thread+0x21/0x30
>
> As Peterz pointed out:
>
> | So the normal rules for changing task_struct::cpus_allowed are holding
> | both pi_lock and rq->lock, such that holding either stabilizes the mask.
> |
> | This is so that wakeup can happen without rq->lock and load-balance
> | without pi_lock.
> |
> | From this we already get the relaxation that we can omit acquiring
> | rq->lock if the task is not on the rq, because in that case
> | load-balancing will not apply to it.
> |
> | ** these are the rules currently tested in do_set_cpus_allowed() **
> |
> | Now, since __set_cpus_allowed_ptr() uses task_rq_lock() which
> | unconditionally acquires both locks, we could get away with holding just
> | rq->lock when on_rq for modification because that'd still exclude
> | __set_cpus_allowed_ptr(), it would also work against
> | __kthread_bind_mask() because that assumes !on_rq.
> |
> | That said, this is all somewhat fragile.
> |
> | Now, I don't think dropping rq->lock is quite as disastrous as it
> | usually is because !cpu_active at this point, which means load-balance
> | will not interfere, but that too is somewhat fragile.
> |
> | So we end up with a choice of two fragile..
>
> This patch fixes it by following the rules for changing
> task_struct::cpus_allowed with both pi_lock and rq->lock held.
>
> Cc: Ingo Molnar <mingo@kernel.org>
> Reported-by: kernel test robot <ying.huang@intel.com>
> Reported-by: Sasha Levin <sasha.levin@oracle.com>
> Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
> [Modified changelog and patch]
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Link: http://lkml.kernel.org/r/BLU436-SMTP1660820490DE202E3934ED3806E0@phx.gbl
> ---
>
> kernel/sched/core.c | 29 ++++++++++++++++++++++++++---
> 1 file changed, 26 insertions(+), 3 deletions(-)
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -5178,24 +5178,47 @@ static void migrate_tasks(struct rq *dea
> break;
>
> /*
> - * Ensure rq->lock covers the entire task selection
> - * until the migration.
> + * pick_next_task assumes pinned rq->lock.
> */
> lockdep_pin_lock(&rq->lock);
> next = pick_next_task(rq, &fake_task);
> BUG_ON(!next);
> next->sched_class->put_prev_task(rq, next);
>
> + /*
> + * Rules for changing task_struct::cpus_allowed are holding
> + * both pi_lock and rq->lock, such that holding either
> + * stabilizes the mask.
> + *
> + * Dropping rq->lock is not quite as disastrous as it usually is
> + * because !cpu_active at this point, which means load-balance
> + * will not interfere. Also, stop-machine.
> + */
> + lockdep_unpin_lock(&rq->lock);
> + raw_spin_unlock(&rq->lock);
> + raw_spin_lock(&next->pi_lock);
> + raw_spin_lock(&rq->lock);
> +
> + /*
> + * Since we're inside stop-machine, _nothing_ should have
> + * changed the task, WARN if weird stuff happened, because in
> + * that case the above rq->lock drop is a fail too.
> + */
> + if (WARN_ON(task_rq(next) != rq || !task_on_rq_queued(next))) {
> + raw_spin_unlock(&next->pi_lock);
> + continue;
> + }
> +
> /* Find suitable destination for @next, with force if needed. */
> dest_cpu = select_fallback_rq(dead_rq->cpu, next);
>
> - lockdep_unpin_lock(&rq->lock);
> rq = __migrate_task(rq, next, dest_cpu);
> if (rq != dead_rq) {
> raw_spin_unlock(&rq->lock);
> rq = dead_rq;
> raw_spin_lock(&rq->lock);
> }
> + raw_spin_unlock(&next->pi_lock);
> }
>
> rq->stop = stop;
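
To make the quoted locking rule concrete for readers outside the scheduler
code: a writer of task_struct::cpus_allowed must hold both pi_lock and
rq->lock, so a reader holding either one of them sees a stable mask (the
wakeup path holds only pi_lock, the load-balance path only rq->lock).
Below is a minimal userspace C sketch of that rule; the pthread mutexes,
struct demo_task and the demo_* function names are invented for
illustration and are not the kernel's API.

/* rule.c - userspace analogy only, not kernel code; build with -pthread */
#include <pthread.h>
#include <stdio.h>

struct demo_task {
        pthread_mutex_t pi_lock;        /* stands in for task_struct::pi_lock */
        pthread_mutex_t rq_lock;        /* stands in for the runqueue lock */
        unsigned long cpus_allowed;
};

/* Writers take BOTH locks... */
static void demo_set_cpus_allowed(struct demo_task *t, unsigned long mask)
{
        pthread_mutex_lock(&t->pi_lock);
        pthread_mutex_lock(&t->rq_lock);
        t->cpus_allowed = mask;
        pthread_mutex_unlock(&t->rq_lock);
        pthread_mutex_unlock(&t->pi_lock);
}

/* ... so a reader holding EITHER lock sees a stable mask. */
static unsigned long demo_read_wakeup(struct demo_task *t)      /* "wakeup": pi_lock only */
{
        pthread_mutex_lock(&t->pi_lock);
        unsigned long mask = t->cpus_allowed;
        pthread_mutex_unlock(&t->pi_lock);
        return mask;
}

static unsigned long demo_read_balance(struct demo_task *t)     /* "load-balance": rq_lock only */
{
        pthread_mutex_lock(&t->rq_lock);
        unsigned long mask = t->cpus_allowed;
        pthread_mutex_unlock(&t->rq_lock);
        return mask;
}

int main(void)
{
        struct demo_task t = {
                .pi_lock = PTHREAD_MUTEX_INITIALIZER,
                .rq_lock = PTHREAD_MUTEX_INITIALIZER,
                .cpus_allowed = 0xf,
        };

        demo_set_cpus_allowed(&t, 0x3);
        printf("wakeup sees %#lx, balance sees %#lx\n",
               demo_read_wakeup(&t), demo_read_balance(&t));
        return 0;
}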
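The hunk above applies that rule from a context that enters holding only
rq->lock: since pi_lock nests outside rq->lock, it cannot simply take
pi_lock there, so it drops rq->lock, takes pi_lock, retakes rq->lock, and
then re-checks (task_rq() and task_on_rq_queued()) that nothing moved the
task while rq->lock was briefly dropped. Here is a sketch of that
drop/retake/revalidate pattern, again as a userspace analogy with invented
names; a bool 'queued' stands in for the kernel's re-check.

/* dance.c - userspace analogy of the drop/retake/revalidate pattern */
#include <pthread.h>
#include <stdbool.h>

struct demo_task {
        pthread_mutex_t pi_lock;        /* must be taken before rq_lock */
        pthread_mutex_t rq_lock;
        bool queued;                    /* stands in for task_on_rq_queued() */
        unsigned long cpus_allowed;
};

/*
 * Enter holding only t->rq_lock (as migrate_tasks() does); both locks are
 * needed to change t->cpus_allowed.  Because pi_lock ranks above rq_lock,
 * drop rq_lock, take pi_lock, retake rq_lock, then re-validate whatever
 * could have changed while rq_lock was dropped.
 *
 * Returns true with both locks held; false with only rq_lock held, which
 * mirrors the WARN_ON() + continue path in the patch.
 */
static bool demo_lock_both_and_revalidate(struct demo_task *t)
{
        pthread_mutex_unlock(&t->rq_lock);
        pthread_mutex_lock(&t->pi_lock);
        pthread_mutex_lock(&t->rq_lock);

        if (!t->queued) {
                pthread_mutex_unlock(&t->pi_lock);
                return false;
        }
        return true;
}

int main(void)
{
        struct demo_task t = {
                .pi_lock = PTHREAD_MUTEX_INITIALIZER,
                .rq_lock = PTHREAD_MUTEX_INITIALIZER,
                .queued = true,
                .cpus_allowed = 0xf,
        };

        pthread_mutex_lock(&t.rq_lock);         /* the state migrate_tasks() starts from */
        if (demo_lock_both_and_revalidate(&t)) {
                t.cpus_allowed = 0x3;           /* both locks held: safe to change */
                pthread_mutex_unlock(&t.pi_lock);
        }
        pthread_mutex_unlock(&t.rq_lock);
        return 0;
}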


