 
From: Ingo Molnar <mingo@kernel.org>
Date: Thu, 10 Oct 2013
Subject: Re: [PATCH 0/6] Optimize the cpu hotplug locking -v2

* Andrew Morton <akpm@linux-foundation.org> wrote:

> On Thu, 10 Oct 2013 08:27:41 +0200 Ingo Molnar <mingo@kernel.org> wrote:
>
> > * Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > > On Tue, 08 Oct 2013 12:25:05 +0200 Peter Zijlstra <peterz@infradead.org> wrote:
> > >
> > > > The current cpu hotplug lock is a single global lock; therefore
> > > > excluding hotplug is a very expensive proposition, even though it
> > > > is a rare occurrence under normal operation.
> > > >
> > > > There is a desire for a more lightweight implementation of
> > > > {get,put}_online_cpus() from both the NUMA scheduling side and
> > > > the -RT side.
> > > >
> > > > The current hotplug lock is a full reader-preference lock -- and
> > > > thus supports reader recursion. However, since we're making the
> > > > read-side lock much cheaper, the expectation is that it will also
> > > > be used far more, which in turn would lead to writer starvation.
> > > >
> > > > Therefore the proposed new lock is completely fair, albeit somewhat
> > > > expensive on the write side. This in turn means that we need a
> > > > per-task nesting count to support reader recursion.
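
To make the per-task nesting count concrete, here is a rough userspace
model of the idea (not the actual patch; pthread-based, all identifiers
invented). Only the outermost get/put takes the slow path, so a reader
that already holds the lock can re-enter even while a writer is queued,
which is what keeps a fully fair lock free of reader self-deadlock:

/* Userspace model of reader recursion via a per-task nesting count.
 * The slow path is taken only on the outermost get/put; nested calls
 * just bump/drop the per-task counter. */
#include <pthread.h>

static pthread_mutex_t hotplug_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  hotplug_cond  = PTHREAD_COND_INITIALIZER;
static int readers;             /* active outermost readers      */
static int writer_waiting;      /* a "hotplug" writer is queued  */

/* per-task nesting count, analogous to a counter in task_struct */
static __thread int cpuhp_ref;

static void get_online_cpus_model(void)
{
        if (cpuhp_ref++)        /* recursive read: no blocking */
                return;

        pthread_mutex_lock(&hotplug_mutex);
        /* fairness: new outermost readers queue behind a waiting writer */
        while (writer_waiting)
                pthread_cond_wait(&hotplug_cond, &hotplug_mutex);
        readers++;
        pthread_mutex_unlock(&hotplug_mutex);
}

static void put_online_cpus_model(void)
{
        if (--cpuhp_ref)        /* still nested: nothing to do */
                return;

        pthread_mutex_lock(&hotplug_mutex);
        if (--readers == 0)
                pthread_cond_broadcast(&hotplug_cond);
        pthread_mutex_unlock(&hotplug_mutex);
}

/* write side: deliberately the expensive path */
static void cpu_hotplug_begin_model(void)
{
        pthread_mutex_lock(&hotplug_mutex);
        writer_waiting = 1;
        while (readers)
                pthread_cond_wait(&hotplug_cond, &hotplug_mutex);
        /* ... do the hotplug operation with the mutex held ... */
}

static void cpu_hotplug_done_model(void)
{
        writer_waiting = 0;
        pthread_cond_broadcast(&hotplug_cond);
        pthread_mutex_unlock(&hotplug_mutex);
}

int main(void)
{
        get_online_cpus_model();
        get_online_cpus_model();        /* nested read: returns immediately */
        put_online_cpus_model();
        put_online_cpus_model();

        cpu_hotplug_begin_model();      /* writer excludes all readers */
        cpu_hotplug_done_model();
        return 0;
}

This only models the nesting/fairness structure; the actual patches also
have to make the read-side fast path avoid shared cachelines entirely,
which the single mutex here obviously does not.
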
> > >
> > > This is a lot of code and a lot of new complexity. It needs some pretty
> > > convincing performance numbers to justify its inclusion, no?
> >
> > Should be fairly straightforward to test: the sys_sched_getaffinity()
> > and sys_sched_setaffinity() syscalls both make use of
> > get_online_cpus()/put_online_cpus(), so a testcase frobbing affinities
> > on N CPUs in parallel ought to demonstrate scalability improvements
> > pretty nicely.
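
For reference, such a testcase could be as simple as the (untested)
sketch below: N threads hammering sched_getaffinity()/sched_setaffinity()
in parallel. Thread and loop counts are arbitrary; build with
gcc -pthread and compare wall-clock time before/after the patches:

/* Throwaway scalability test: every thread loops over
 * sched_getaffinity()/sched_setaffinity(), both of which take
 * get_online_cpus()/put_online_cpus() in the kernel. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define LOOPS 100000

static void *frob(void *arg)
{
        cpu_set_t mask;
        long i;

        for (i = 0; i < LOOPS; i++) {
                if (sched_getaffinity(0, sizeof(mask), &mask))
                        perror("sched_getaffinity");
                if (sched_setaffinity(0, sizeof(mask), &mask))
                        perror("sched_setaffinity");
        }
        return NULL;
}

int main(int argc, char **argv)
{
        int nthreads = argc > 1 ? atoi(argv[1])
                                : sysconf(_SC_NPROCESSORS_ONLN);
        pthread_t *tids = calloc(nthreads, sizeof(*tids));
        int i;

        for (i = 0; i < nthreads; i++)
                pthread_create(&tids[i], NULL, frob, NULL);
        for (i = 0; i < nthreads; i++)
                pthread_join(tids[i], NULL);

        printf("%d threads x %d get+set affinity loops done\n",
               nthreads, LOOPS);
        return 0;
}
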
>
> Well, an in-kernel microbenchmark which camps in a loop doing get/put
> would measure this as well.
>
> But neither approach answers the question "how useful is this patchset".
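
The in-kernel loop mentioned above could likewise be a trivial module
that just times a tight get/put loop at load time -- an untested sketch,
identifiers invented:

/* Microbenchmark sketch: camp in a loop doing
 * get_online_cpus()/put_online_cpus() and report the elapsed time.
 * This measures raw lock overhead only, not usefulness. */
#include <linux/module.h>
#include <linux/cpu.h>
#include <linux/ktime.h>

static int loops = 1000000;
module_param(loops, int, 0444);

static int __init getput_bench_init(void)
{
        ktime_t t0 = ktime_get();
        int i;

        for (i = 0; i < loops; i++) {
                get_online_cpus();
                put_online_cpus();
        }

        pr_info("getput_bench: %d iterations in %lld ns\n",
                loops, (long long)ktime_to_ns(ktime_sub(ktime_get(), t0)));
        return -EAGAIN; /* fail the load on purpose; it was just a benchmark */
}
module_init(getput_bench_init);
MODULE_LICENSE("GPL");
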

Even ignoring all the other reasons cited, sys_sched_getaffinity() /
sys_sched_setaffinity() are prime time system calls, and as long as the
patches are correct, speeding them up is worthwhile.

Thanks,

Ingo

