Subject: [RFC PATCH 0/6] stop_machine: kill stop_cpus_mutex and stop_cpus_lock
On 06/25, Peter Zijlstra wrote:
>
> On Tue, Jun 23, 2015 at 07:24:16PM +0200, Oleg Nesterov wrote:
> >
> > lock_stop_cpus_works(cpumask)
> > {
> > 	for_each_cpu(cpu, cpumask)
> > 		mutex_lock(per_cpu(cpu_stopper_task, cpu).work_mutex);
> > }
> >
> > unlock_stop_cpus_works(cpumask)
> > {
> > 	for_each_cpu(cpu, cpumask)
> > 		mutex_unlock(...);
> > }
> >
> > which should be used instead of stop_cpus_mutex. After this change
> > stop_two_cpus() can just use stop_cpus().
>
> Right, lockdep annotating that will be 'interesting' though.

Sure, and this is too inefficient; it was only meant to explain what
I mean.
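
To make that a bit more concrete, here is a minimal sketch of what such
helpers could look like. This is not what the series actually adds; the
work_mutex field and the exact struct layout are only assumed for
illustration:

	struct cpu_stopper {
		spinlock_t		lock;		/* protects ->works */
		struct list_head	works;
		struct mutex		work_mutex;	/* assumed: serializes queueing */
	};

	static DEFINE_PER_CPU(struct cpu_stopper, cpu_stopper);

	static void lock_stop_cpus_works(const struct cpumask *cpumask)
	{
		unsigned int cpu;

		/*
		 * for_each_cpu() walks CPUs in ascending order, so every
		 * caller takes the mutexes in the same order and callers
		 * cannot deadlock against each other.  All these mutexes
		 * share one lock class though, which is why annotating
		 * this for lockdep is "interesting", as you say above.
		 */
		for_each_cpu(cpu, cpumask)
			mutex_lock(&per_cpu(cpu_stopper, cpu).work_mutex);
	}

	static void unlock_stop_cpus_works(const struct cpumask *cpumask)
	{
		unsigned int cpu;

		for_each_cpu(cpu, cpumask)
			mutex_unlock(&per_cpu(cpu_stopper, cpu).work_mutex);
	}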

How about this series? Untested. For review.

> And
> stop_two_cpus() then has the problem of allocating a cpumask.

Yes, but we can avoid this, see the changelog in 5/6.
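
For example, something like this would work without building a cpumask
(only a sketch based on the assumed work_mutex above, not necessarily
what 5/6 does; see its changelog for the real approach):

	static void lock_stop_two_cpus_works(unsigned int cpu1, unsigned int cpu2)
	{
		/* Lock in ascending cpu order so the ordering matches
		 * lock_stop_cpus_works() and no cpumask is needed. */
		if (cpu2 < cpu1)
			swap(cpu1, cpu2);

		mutex_lock(&per_cpu(cpu_stopper, cpu1).work_mutex);
		mutex_lock_nested(&per_cpu(cpu_stopper, cpu2).work_mutex,
				  SINGLE_DEPTH_NESTING);
	}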

> Simpler to
> let it keep 'abusing' the queueing spinlock in there.

Not sure.

And note that this series kills stop_cpus_mutex, so multiple
stop_cpus() / stop_machine() calls can run in parallel if their
cpumasks do not overlap.

Note also the changelog in 6/6; we can simplify and optimize this code
a bit more.

What do you think?

Oleg.

 include/linux/lglock.h  |   5 -
 kernel/locking/lglock.c |  22 -----
 kernel/stop_machine.c   | 197 ++++++++++++++++++++++++++++-------------------
 3 files changed, 119 insertions(+), 105 deletions(-)


