From: Dave Chinner <david@fromorbit.com>
Subject: Re: [PATCH 00/13] IO-less dirty throttling
On Wed, Nov 17, 2010 at 05:59:00PM -0800, Andrew Morton wrote:
> On Thu, 18 Nov 2010 12:40:51 +1100 Dave Chinner <david@fromorbit.com> wrote:
> > Yeah, sorry, should have posted them - I didn't because I snapped
> > the numbers before the run had finished. Without series:
> >
> > 373.19user 14940.49system 41:42.17elapsed 612%CPU (0avgtext+0avgdata 82560maxresident)k
> > 0inputs+0outputs (403major+2599763minor)pagefaults 0swaps
> >
> > With your series:
> >
> > 359.64user 5559.32system 40:53.23elapsed 241%CPU (0avgtext+0avgdata 82496maxresident)k
> > 0inputs+0outputs (312major+2598798minor)pagefaults 0swaps
> >
> > So the wall time with your series is lower, and system CPU time is
> > way down (as I've already noted) for this workload on XFS.
>
> How much of that benefit is an accounting artifact, moving work away
> from the calling process's CPU and into kernel threads?

As I spelled out in my original results, the sustained CPU usage for
the unmodified kernel is ~780% (620% fs_mark, 80% bdi-flusher, 80%
kswapd), i.e. completely CPU bound on the 8p test VM. With this
series, the sustained CPU usage is about 380% (250% fs_mark, 80%
bdi-flusher, 50% kswapd).
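
For reference, a quick back-of-the-envelope check (a Python sketch,
using only the figures quoted above) of how those numbers hang
together - the %CPU column in the time(1) output is just
(user + sys) / elapsed, and the per-process figures sum to the
quoted totals:

    # Sanity check of the CPU figures quoted above (illustrative only).
    # time(1) reports %CPU as (user + sys) / elapsed wall time.

    def pct_cpu(user_s, sys_s, elapsed_s):
        """CPU utilisation percentage, as time(1) computes it."""
        return 100.0 * (user_s + sys_s) / elapsed_s

    # Unmodified kernel: 373.19 user, 14940.49 sys, 41:42.17 elapsed
    print(round(pct_cpu(373.19, 14940.49, 41 * 60 + 42.17)))   # ~612

    # Patched kernel: 359.64 user, 5559.32 sys, 40:53.23 elapsed
    print(round(pct_cpu(359.64, 5559.32, 40 * 60 + 53.23)))    # ~241

    # Sustained usage broken down per process, from the numbers above:
    print(620 + 80 + 80)   # 780% before the series
    print(250 + 80 + 50)   # 380% with the series - roughly half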

IOWs, this series _halved_ the total sustained CPU usage even after
taking into account all the kernel threads. With wall time also
being reduced and the number of IOs issued dropping by 25%, I find
it hard to classify the result as anything other than spectacular...

Cheers,

Dave.
--
Dave Chinner
david@fromorbit.com

