 
Subject: Re: SLUB performance regression vs SLAB
From: Jens Axboe
Date: Fri, 5 Oct 2007
On Fri, Oct 05 2007, David Chinner wrote:
> On Thu, Oct 04, 2007 at 03:07:18PM -0700, David Miller wrote:
> > From: Chuck Ebbert <cebbert@redhat.com>
> > Date: Thu, 04 Oct 2007 17:47:48 -0400
> >
> > > On 10/04/2007 05:11 PM, David Miller wrote:
> > > > From: Chuck Ebbert <cebbert@redhat.com>
> > > > Date: Thu, 04 Oct 2007 17:02:17 -0400
> > > >
> > > >> How do you simulate reading 100TB of data spread across 3000 disks,
> > > >> selecting 10% of it using some criterion, then sorting and summarizing
> > > >> the result?
> > > >
> > > > You repeatedly read zeros from a smaller disk into the same amount of
> > > > memory, and sort that as if it were real data instead.
> > >
> > > You've just replaced 3000 concurrent streams of data with a single stream.
> > > That won't test the memory allocator's ability to allocate memory to many
> > > concurrent users very well.
> >
> > You've kindly removed my "thinking outside of the box" comment.
> >
> > The point was not that my specific suggestion would be perfect, but that
> > if you used your creativity and thought in similar directions you might find
> > a way to do it.
> >
> > People are too narrow minded when it comes to these things, and that's the
> > problem I want to address.
>
> And it's a good point, too, because what's a problem for one person is
> often a no-brainer for someone else.
>
> Creating lots of "fake" disks is trivial to do, IMO. Use loopback on
> sparse files, use ramdisks containing sparse files, or write a sparse
> dm target for sparse block device mapping, etc. I'm sure there's more
> than the few I just threw out...

Or use scsi_debug to fake drives/controllers; it works wonderfully as
well for some things and involves the full IO stack.
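
For the loopback-on-sparse-files idea Dave mentions, a rough sketch of
what's involved is below: create a big sparse backing file and attach it
to a free loop device, then repeat for as many fake spindles as you need.
This is only an illustration, not anyone's actual test setup; the file
path and the 100GB size are made up, and it assumes a kernel that
exposes /dev/loop-control.

	/*
	 * Rough sketch: back a "fake" 100GB disk with a sparse file and a
	 * loop device. Assumes /dev/loop-control exists; error handling is
	 * minimal and the path/size are placeholders.
	 */
	#define _FILE_OFFSET_BITS 64
	#include <fcntl.h>
	#include <linux/loop.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <unistd.h>

	int main(void)
	{
		char loopname[64];
		int backing, ctl, devnr, loopfd;

		/* Sparse 100GB backing file: no blocks allocated until written. */
		backing = open("/tmp/fake-disk-0.img", O_RDWR | O_CREAT, 0600);
		if (backing < 0 || ftruncate(backing, 100LL << 30) < 0) {
			perror("backing file");
			return 1;
		}

		/* Ask the loop driver for a free device number. */
		ctl = open("/dev/loop-control", O_RDWR);
		if (ctl < 0 || (devnr = ioctl(ctl, LOOP_CTL_GET_FREE)) < 0) {
			perror("loop-control");
			return 1;
		}

		/* Attach the sparse file; /dev/loopN now behaves like a disk. */
		snprintf(loopname, sizeof(loopname), "/dev/loop%d", devnr);
		loopfd = open(loopname, O_RDWR);
		if (loopfd < 0 || ioctl(loopfd, LOOP_SET_FD, backing) < 0) {
			perror("LOOP_SET_FD");
			return 1;
		}

		printf("%s backed by sparse /tmp/fake-disk-0.img\n", loopname);
		return 0;
	}

Run it as root, repeat N times, and you have N block devices you can
mkfs and hammer concurrently without needing N real disks behind them.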

I'd like to second David's emails here, this is a serious problem.
Having a reproducible test case lowers the barrier for getting the
problem fixed by orders of magnitude. It's the difference between the
problem getting fixed in a day or two and it potentially lingering for
months, because email ping-pong takes forever and "the test team has
moved on to other tests, we'll let you know the results of test foo in
3 weeks time when we have a new slot on the box" removes any developer
motivation to work on the issue.

--
Jens Axboe

