Date: Mon, 17 Jun 2002
From: dean gaudet
Subject: Re: 3x slower file reading oddity
On Mon, 17 Jun 2002, Andrew Morton wrote:

> dean gaudet wrote:
> > what if you have a disk array with lots of spindles? it seems at some
> > point that you need to give the array or some lower level driver a lot of
> > i/os to choose from so that it can get better parallelism out of the
> > hardware.
>
> mm. For that particular test, you'd get nice speedups from striping
> the blockgroups across disks, so each `cat' is probably talking to
> a different disk. I don't think I've seen anything like that proposed
> though.

heh, a 128MB stripe? that'd be huge :)

> You could fork one `cat' per file ;) (Not so silly, really. But if
> you took this approach, you'd need "many" more threads than blockgroups).

i actually tried this first :) the problem then becomes a fork()
bottleneck before you ever hit the disk bottleneck. iirc the numbers
were ~45s for one-file-per-cat (for any -Pn, n <= 10), ~30s for
100-files-per-cat (-P1), and ~1m15s for 100-files-per-cat (-P2).
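
for reference, the harness was along these lines (a reconstruction --
the path and the find/xargs plumbing are my guesses; only the -n/-P
numbers come from the measurements above):

    # one file per cat, up to 10 workers: ~45s regardless of -P
    find /backup/snap -type f | xargs -n1 -P10 cat > /dev/null
    # 100 files per cat, one worker: ~30s
    find /backup/snap -type f | xargs -n100 -P1 cat > /dev/null
    # 100 files per cat, two workers: ~1m15s
    find /backup/snap -type f | xargs -n100 -P2 cat > /dev/null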

> hmm. What else? Physical readahead - read metadata into the block
> device's pagecache and flip pages from there into directories and
> files on-demand. Fat chance of that happening.

one idea i had -- given that the server has a volume manager and you're
working from a snapshot volume anyway (the only sane way to do backups),
it might make more sense to use userland ext2/3 libraries to read the
snapshot block device directly. but this kind of makes me cringe :)
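
something like this, say with e2fsprogs' debugfs (which sits on top of
libext2fs) -- a sketch only, the volume names are hypothetical and rdump
is a feature of reasonably recent debugfs:

    # snapshot the live volume...
    lvcreate -L 1G -s -n snap /dev/vg0/home
    # ...then pull the whole tree off it entirely from userland,
    # bypassing the kernel's filesystem and page cache
    debugfs -R 'rdump / /backup' /dev/vg0/snap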

-dean

