    Subject: Re: Understanding I/O behaviour - next try
    On Tue, Aug 28, 2007 at 08:53:07AM -0700, Martin Knoblauch wrote:
    > The basic setup is a dual x86_64 box with 8 GB of memory. The DL380
    > has a HW RAID5, made from 4x72GB disks and about 100 MB write cache.
    > The performance of the block device with O_DIRECT is about 90 MB/sec.
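
    (For reference, a rough sketch of the kind of O_DIRECT write test that
    produces a number like that. The device path and the 1 GiB total are
    only placeholders, and writing to the raw device is destructive, so
    point it at a scratch disk:)

    #define _GNU_SOURCE             /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/time.h>
    #include <unistd.h>

    #define BUFSZ   (1024 * 1024)   /* 1 MiB per aligned write */
    #define NWRITES 1024            /* about 1 GiB total */

    int main(void)
    {
            /* placeholder path - do NOT run against a disk holding data */
            int fd = open("/dev/cciss/c0d1", O_WRONLY | O_DIRECT);
            struct timeval t0, t1;
            void *buf;
            double secs;
            long i;

            if (fd < 0 || posix_memalign(&buf, 4096, BUFSZ))
                    return 1;
            memset(buf, 0, BUFSZ);

            gettimeofday(&t0, NULL);
            for (i = 0; i < NWRITES; i++)
                    if (write(fd, buf, BUFSZ) != BUFSZ)
                            return 1;
            gettimeofday(&t1, NULL);

            secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
            printf("%.1f MB/sec\n", NWRITES * (BUFSZ / 1e6) / secs);
            return 0;
    }
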
    > The problematic behaviour comes when we are moving large files through
    > the system. The file usage in this case is mostly "use once" or
    > streaming. As soon as the amount of file data is larger than 7.5 GB, we
    > see occasional unresponsiveness of the system (e.g. no more ssh
    > connections into the box) of more than 1 or 2 minutes (!) duration
    > (kernels up to 2.6.19). Load goes up, mainly due to pdflush threads and
    > some other poor guys being in "D" state.
    > Just by chance I found out that doing all I/O in sync mode does
    > prevent the load from going up. Of course, I/O throughput is not
    > stellar (but not much worse than the non-O_DIRECT case). But the
    > responsiveness seems OK. Maybe a solution, as this can be controlled
    > via mount (something similar for O_DIRECT would be great :-).
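
    (For a single application the same behaviour can be had without a
    remount by passing O_SYNC to open(); a minimal sketch, the path is
    just an example:)

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[64 * 1024];
            int fd, i;

            memset(buf, 0, sizeof(buf));
            /* O_SYNC: each write() returns only after the data has reached
             * stable storage, roughly what "-o sync" does mount-wide */
            fd = open("/scratch/testfile",
                      O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644);
            if (fd < 0)
                    return 1;
            for (i = 0; i < 16; i++)
                    if (write(fd, buf, sizeof(buf)) != sizeof(buf))
                            return 1;
            return close(fd);
    }
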
    > In general 2.6.22 seems to be better than 2.6.19, but this is highly
    > subjective :-( I am using the following settings in /proc. They seem to
    > provide the smoothest responsiveness:
    > vm.dirty_background_ratio = 1
    > vm.dirty_ratio = 1
    > vm.swappiness = 1
    > vm.vfs_cache_pressure = 1
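
    (Those map to files under /proc/sys/vm, so they can also be applied
    programmatically instead of via sysctl; a small sketch that sets
    exactly the values quoted above, needs root:)

    #include <stdio.h>

    /* Write one value into a /proc/sys/vm tunable; same as "sysctl -w". */
    static int set_vm_tunable(const char *name, const char *val)
    {
            char path[128];
            FILE *f;

            snprintf(path, sizeof(path), "/proc/sys/vm/%s", name);
            f = fopen(path, "w");
            if (!f)
                    return -1;
            fprintf(f, "%s\n", val);
            return fclose(f);
    }

    int main(void)
    {
            /* the settings listed above */
            set_vm_tunable("dirty_background_ratio", "1");
            set_vm_tunable("dirty_ratio", "1");
            set_vm_tunable("swappiness", "1");
            set_vm_tunable("vfs_cache_pressure", "1");
            return 0;
    }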

    You are apparently running into the sluggish kupdate-style writeback
    problem with large files: a huge amount of dirty pages gets accumulated
    and then flushed to disk all at once when the dirty background ratio is
    reached. The current -mm tree has some fixes for it, and
    there are some more in my tree. Martin, I'll send you the patch if
    you'd like to try it out.

    > Another thing I saw during my tests is that when writing to NFS, the
    > "dirty" or "nr_dirty" numbers are always 0. Is this a conceptual thing,
    > or a bug?

    What are the nr_unstable numbers?
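
    (They can be watched together with nr_dirty in /proc/vmstat; a quick
    sketch that polls the relevant counters once a second. Pages that have
    been sent to the NFS server but not yet committed are accounted as
    unstable rather than dirty:)

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            char line[128];
            FILE *f;
            int t;

            for (t = 0; t < 600; t++) {
                    f = fopen("/proc/vmstat", "r");
                    if (!f)
                            return 1;
                    while (fgets(line, sizeof(line), f))
                            if (!strncmp(line, "nr_dirty ", 9) ||
                                !strncmp(line, "nr_writeback ", 13) ||
                                !strncmp(line, "nr_unstable ", 12))
                                    fputs(line, stdout);
                    fclose(f);
                    putchar('\n');
                    sleep(1);
            }
            return 0;
    }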


