    Subject: Re: very poor ext3 write performance on big filesystems?
    On Mon, Feb 18, 2008 at 05:16:55PM +0100, Tomasz Chmielewski wrote:
    > Theodore Tso wrote:
    >> I'd really need to know exactly what kind of operations you were
    >> trying to do that were causing problems before I could say for sure.
    >> Yes, you said you were removing unneeded files, but how were you doing
    >> it? With rm -r of old hard-linked directories?
    > Yes, with rm -r.

    You should definitely try the spd_readdir hack; it will help reduce
    seek times. It will probably help on any block-group-oriented
    filesystem, including XFS, etc.
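
    The real spd_readdir is, roughly, an LD_PRELOAD wrapper around
    opendir() and readdir() that hands entries back sorted by inode
    number. Purely as a sketch of the idea (the program below is made up
    for illustration, only unlinks plain files, and does not recurse),
    deleting in inode order instead of readdir order looks like this:

        #include <dirent.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        /* sort directory entries by inode number, i.e. rough disk order */
        static int by_inode(const struct dirent **a, const struct dirent **b)
        {
                if ((*a)->d_ino < (*b)->d_ino)
                        return -1;
                if ((*a)->d_ino > (*b)->d_ino)
                        return 1;
                return 0;
        }

        int main(int argc, char **argv)
        {
                struct dirent **list;
                int i, n;

                if (argc != 2) {
                        fprintf(stderr, "usage: %s <dir>\n", argv[0]);
                        return 1;
                }
                if (chdir(argv[1]) < 0) {
                        perror(argv[1]);
                        return 1;
                }
                /* scandir() lets us plug the d_ino comparison in directly */
                n = scandir(".", &list, NULL, by_inode);
                if (n < 0) {
                        perror("scandir");
                        return 1;
                }
                for (i = 0; i < n; i++) {
                        /* skip "." and ".." (and, as a side effect, dotfiles) */
                        if (list[i]->d_name[0] != '.' &&
                            unlink(list[i]->d_name) < 0)
                                perror(list[i]->d_name);
                        free(list[i]);
                }
                free(list);
                return 0;
        }

    Whether you use the preload or something like the above, the point is
    the same: unlink in something close to inode order and the disk heads
    stop bouncing all over the platter.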

    >> How big are the
    >> average files involved? Etc.
    > It's hard to estimate the average size of a file. I'd say there are not
    > many files bigger than 50 MB.

    Well, ext4 (with extents) will help for files bigger than 48k; with
    4k blocks that is the point where ext3 runs out of the inode's twelve
    direct block pointers and has to start allocating indirect blocks.

    The other thing that might help is using an external journal on a
    separate hard drive (for either ext3 or ext4). That alleviates some
    of the seek storms, since the journal is only ever written
    sequentially, and putting it on its own spindle takes that traffic
    off the data disks.
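
    If you go that route, the rough sequence (device names below are made
    up, and you'll want to double-check the man pages) is to create a
    journal device whose block size matches the filesystem's, then point
    the filesystem at it:

        # dedicated journal device, block size matching the filesystem
        mke2fs -b 4096 -O journal_dev /dev/sdc1

        # new filesystem using the external journal
        mkfs.ext3 -b 4096 -J device=/dev/sdc1 /dev/md0

        # or retrofit an existing (unmounted) filesystem
        tune2fs -O ^has_journal /dev/md0
        tune2fs -j -J device=/dev/sdc1 /dev/md0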

    I assume that your 1.2 TB filesystem is located on a RAID array; did
    you use the mke2fs -E stride option to make sure all of the bitmaps
    don't get concentrated on one hard drive spindle? One failure mode
    is that with a 4+1 RAID 5 setup, all of the block and inode bitmaps
    can end up laid out on a single hard drive, which then becomes a
    bottleneck for bitmap-intensive workloads --- including "rm -rf".
    So that's another thing that might be going on. If you run
    "dumpe2fs" and look at the block numbers for the block and inode
    allocation bitmaps, and you find that they are all landing on the
    same physical hard drive, then that's very clearly the biggest
    problem given an "rm -rf" workload. You should be able to see
    this as well visually; if one hard drive has its hard drive light
    almost constantly on, and the other ones don't have much activity,
    that's probably what is happening.
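
    Concretely (chunk size and device names here are only an example),
    stride is the RAID chunk size divided by the filesystem block size,
    and dumpe2fs shows where the bitmaps ended up:

        # 4+1 RAID 5 with 64k chunks and 4k blocks: stride = 64k / 4k = 16
        mke2fs -j -b 4096 -E stride=16 /dev/md0

        # see how the block/inode bitmaps are spread across the array
        dumpe2fs /dev/md0 | grep -i "bitmap at"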

    - Ted
