Subject: Re: [PATCH 0/3] timer: Improve itimers scalability
From: Jason Low <jason.low2@hp.com>
Date: Wed, 26 Aug 2015
On Wed, 2015-08-26 at 19:08 +0200, Oleg Nesterov wrote:
> On 08/26, Jason Low wrote:
> >
> > Hi Andrew,
> >
> > On Tue, 2015-08-25 at 20:27 -0700, Andrew Morton wrote:
> > > On Tue, 25 Aug 2015 20:17:45 -0700 Jason Low <jason.low2@hp.com> wrote:
> > >
> > > > When running a database workload on a 16 socket machine, there were
> > > > scalability issues related to itimers.
> > > >
> > > > Commit 1018016c706f addressed the issue with the thread_group_cputimer
> > > > spinlock taking up a significant portion of total run time.
> > > >
> > > > This patch series addresses the other issue where a lot of time is spent
> > > > trying to acquire the sighand lock. It was found in some cases that
> > > > 200+ threads were simultaneously contending for the same sighand lock,
> > > > reducing throughput by more than 30%.
> > >
> > > Does this imply that the patchset increased the throughput of this
> > > workload by 30%?
> > >
> > > And is this test case realistic? If not, what are the benefits on a
> > > real-world workload?
> >
> > Yes, the test case with the database workload is realistic.
>
> Can't resist, sorry... to me the very idea of using process-wide posix-
> cpu-timers in a performance-critical application doesn't look realistic ;)

I will let Hideaki elaborate further on the issue at the application
level.
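
[For context: the "process-wide posix-cpu-timers" in question are timers
armed against CLOCK_PROCESS_CPUTIME_ID (setitimer(ITIMER_PROF) is the
classic interface). CPU time consumed by any thread in the process counts
toward the same per-process clock, so expiry checking goes through shared
per-process state, which is where the sighand lock contention described
above arises. A minimal userspace sketch of arming such a timer follows;
the interval values are illustrative only and are not taken from the
database workload:

#include <signal.h>
#include <string.h>
#include <time.h>

int main(void)
{
	struct sigevent sev;
	struct itimerspec its;
	timer_t timerid;

	/* Deliver SIGPROF when the process-wide CPU clock expires. */
	memset(&sev, 0, sizeof(sev));
	sev.sigev_notify = SIGEV_SIGNAL;
	sev.sigev_signo = SIGPROF;

	/* One clock for the whole process: CPU time consumed by any
	 * thread counts toward this timer. */
	if (timer_create(CLOCK_PROCESS_CPUTIME_ID, &sev, &timerid))
		return 1;

	its.it_value.tv_sec = 1;	/* first expiry after 1s of CPU time */
	its.it_value.tv_nsec = 0;
	its.it_interval.tv_sec = 1;	/* then every further 1s */
	its.it_interval.tv_nsec = 0;

	if (timer_settime(timerid, 0, &its, NULL))
		return 1;

	/* ... spawn worker threads here; with many threads, expiry
	 * checks consult the shared per-process timer state, which is
	 * the path this patch series optimizes ... */
	return 0;
}

(Link with -lrt on older glibc.)]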

> However, I think the patches are fine.
>
>
> Reviewed-by: Oleg Nesterov <oleg@redhat.com>

Thanks for reviewing!


