From: Dave Chinner <david@fromorbit.com>
Subject: Re: [patch 1/6] fs: icache RCU free inodes
On Tue, Nov 16, 2010 at 02:49:06PM +1100, Nick Piggin wrote:
> On Tue, Nov 16, 2010 at 02:02:43PM +1100, Dave Chinner wrote:
> > On Mon, Nov 15, 2010 at 03:21:00PM +1100, Nick Piggin wrote:
> > > This is 30K inodes per second per CPU, versus the nearly 800K per
> > > second number that I measured the 12% slowdown with. About 25x
> > > slower.
> >
> > Hi Nick, the ramfs (800k/12%) numbers are not the context I was
> > responding to - you're comparing apples to oranges. I was responding to
> > the "XFS [on a ramdisk] is about 4.9% slower" result.
>
> Well, xfs on ramdisk was (85k/4.9%).
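
(For reference, the pattern whose per-inode cost is being argued
about here is deferring the inode free through an RCU grace period.
A minimal kernel-side sketch, with the field and function names as
assumptions rather than quotes from the patch:

	/* called once a grace period has elapsed */
	static void inode_free_rcu(struct rcu_head *head)
	{
		struct inode *inode = container_of(head, struct inode, i_rcu);

		kmem_cache_free(inode_cachep, inode);
	}

	static void destroy_inode(struct inode *inode)
	{
		__destroy_inode(inode);
		/* defer the free so lock-free walkers can still read the inode */
		call_rcu(&inode->i_rcu, inode_free_rcu);
	}

The call_rcu() bookkeeping on every inode free is the overhead the
percentages in this thread are measuring.)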

How many threads? On a 2.26GHz Nehalem-class Xeon CPU, I'm seeing:

threads    files/s
      1        45k
      2        70k
      4       130k
      8       230k

Scalability is mainly limited by the dcache_lock. I'm not sure
what your 85k number relates to in the above chart. Is it a single
thread number, or something else? If it is a single thread, can you
run your numbers again with a thread per CPU?
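
(The benchmark itself isn't quoted in the thread, so as a point of
reference, a minimal sketch of the kind of per-thread create/unlink
loop behind numbers like these - the NFILES count, path names and
thread cap are illustrative assumptions, and timing is left to the
caller - might be:

	#include <fcntl.h>
	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/stat.h>
	#include <unistd.h>

	#define NFILES		100000	/* illustrative, not from the thread */
	#define MAX_THREADS	64

	static void *create_worker(void *arg)
	{
		long id = (long)arg;
		char path[64];
		int i, fd;

		for (i = 0; i < NFILES; i++) {
			snprintf(path, sizeof(path), "d%ld/f%d", id, i);
			fd = open(path, O_CREAT | O_WRONLY, 0644);
			if (fd < 0) {
				perror("open");
				exit(1);
			}
			close(fd);
			unlink(path);	/* exercise the inode free path too */
		}
		return NULL;
	}

	int main(int argc, char **argv)
	{
		int nthreads = argc > 1 ? atoi(argv[1]) : 1;
		pthread_t tid[MAX_THREADS];
		char dir[32];
		long t;

		if (nthreads < 1 || nthreads > MAX_THREADS)
			nthreads = 1;
		/* one directory per thread to avoid contending on one parent */
		for (t = 0; t < nthreads; t++) {
			snprintf(dir, sizeof(dir), "d%ld", t);
			mkdir(dir, 0755);
			if (pthread_create(&tid[t], NULL, create_worker,
					   (void *)t)) {
				fprintf(stderr, "pthread_create failed\n");
				exit(1);
			}
		}
		for (t = 0; t < nthreads; t++)
			pthread_join(tid[t], NULL);
		return 0;
	}

Built with "gcc -pthread" and run with one thread per CPU on the
filesystem under test, files/s is nthreads * NFILES divided by the
elapsed wall time.)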

> At a lower number, like 30k, I would
> expect that should be around 1-2% perhaps. And in the context of a
> real workload that is not 100% CPU bound on creating and destroying a
> single inode, I expect that to be well under 1%.
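
(One way to read that extrapolation, assuming the RCU-free overhead
is a fixed cost per inode - the thread doesn't spell it out:

	per-inode cost:	0.049 / 85,000/s  ~= 0.6us
	at 30k/s:	30,000/s * 0.6us  ~= 1.7% of a CPU

which lands in the "1-2%" range quoted above.)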

I don't think we are comparing apples to apples. I cannot see how you
can get mainline XFS to sustain 85k files/s/cpu across any number of
CPUs, so let's make sure we are comparing the same thing....

> Like I said, I never disputed a potential regression, but I have looked
> for workloads that have a detectable regression and have not found any.
> And I have extrapolated microbenchmark numbers to show that it's not
> going to be a _big_ problem even in a worst case scenario.

How did you extrapolate the numbers?

Cheers,

Dave.
--
Dave Chinner
david@fromorbit.com

