Subject: Re: [performance bug] kernel building regression on 64 LCPUs machine

    On Sat, 2011-03-05 at 02:27 +0800, Jeff Moyer wrote:
    > Jeff Moyer <jmoyer@redhat.com> writes:
    >
    > > Jan Kara <jack@suse.cz> writes:
    > >
    > >> I'm not so happy with the ext4 results. The difference between ext3 and
    > >> ext4 might be that the amount of data written by kjournald in ext3 is
    > >> considerably larger if it ends up pushing out data as well (because of
    > >> data=ordered mode). With ext4, all data is written by filemap_fdatawrite()
    > >> from fsync because of delayed allocation. So maybe for ext4 WRITE_SYNC_PLUG
    > >> is hurting us with your fast storage and the small amount of written data?
    > >> With WRITE_SYNC, the data would already be on its way to storage before we
    > >> get around to waiting for it...
    > >
    > >> Or it could be that we really send more data in WRITE mode rather than in
    > >> WRITE_SYNC mode with the patch on ext4 (that should be verifiable with
    > >> blktrace). But I wonder how that could happen...
    > >
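    (For context on the ext4 side of that point: with delayed allocation the
    dirty data is still only in the page cache when fsync runs, so fsync itself
    has to push it out and wait for it. A minimal sketch of that shape --
    simplified, not the actual ext4_sync_file(), and fsync_data_sketch() is a
    made-up name -- would be:

	#include <linux/fs.h>

	/* Simplified illustration only: fsync submits the file's dirty pages
	 * and then waits for them, rather than finding them already written
	 * out by the journal thread as often happens with ext3 data=ordered. */
	static int fsync_data_sketch(struct file *file)
	{
		struct address_space *mapping = file->f_mapping;
		int err;

		err = filemap_fdatawrite(mapping);	/* submit dirty pages */
		if (err)
			return err;
		return filemap_fdatawait(mapping);	/* wait for completion */
	}

    The request type those pages go out with is exactly the WRITE_SYNC vs.
    WRITE_SYNC_PLUG question raised above.)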
    > > It looks like this is the case: the I/O isn't coming down as
    > > synchronous. I'm seeing a lot of plain writes and very few sync writes,
    > > which means the write stream will be preempted by the incoming reads.
    > >
    > > Time to audit that fsync path and make sure it's marked properly, I
    > > guess.
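    (As a minimal sketch of the sort of thing that audit is looking for -- not
    the actual ext4 submission code, and pick_write_rw() is just a made-up
    helper name -- a writeback submission path is expected to pick its request
    type from wbc->sync_mode, roughly like this:

	#include <linux/fs.h>
	#include <linux/writeback.h>

	/* Hypothetical helper, for illustration only: choose the request type
	 * for a writeback pass.  If an fsync-driven pass falls through to the
	 * plain WRITE branch, the block layer treats it as ordinary async
	 * writeback and CFQ will let the incoming reads preempt it -- the
	 * concern described above. */
	static int pick_write_rw(struct writeback_control *wbc)
	{
		if (wbc->sync_mode == WB_SYNC_ALL)
			return WRITE_SYNC;	/* data-integrity writeback (fsync/sync) */
		return WRITE;			/* background writeback */
	}

    The audit is essentially checking that the fsync path really takes the
    WRITE_SYNC branch rather than submitting plain WRITEs.)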
    >
    > OK, I spoke too soon. Here's the blktrace summary information (I re-ran
    > the tests using three samples; the blktrace is from the last of the
    > three runs in each case):
    >
    > Vanilla
    > -------
    > fs_mark: 307.288 files/sec
    > fio: 286509 KB/s
    >
    > Total (sde):
    > Reads Queued:      341,558,  84,994MiB   Writes Queued:     1,561K,   6,244MiB
    > Read Dispatches:   341,493,  84,994MiB   Write Dispatches:  648,046,  6,244MiB
    > Reads Requeued:          0               Writes Requeued:        27
    > Reads Completed:   341,491,  84,994MiB   Writes Completed:  648,021,  6,244MiB
    > Read Merges:            65,   2,780KiB   Write Merges:      913,076,  3,652MiB
    > IO unplugs:        578,102               Timer unplugs:           0
    >
    > Throughput (R/W): 282,797KiB/s / 20,776KiB/s
    > Events (sde): 16,724,303 entries
    >
    > Patched
    > -------
    > fs_mark: 278.587 files/sec
    > fio: 298007 KB/s
    >
    > Total (sde):
    > Reads Queued:      345,407,  86,834MiB   Writes Queued:     1,566K,   6,264MiB
    > Read Dispatches:   345,391,  86,834MiB   Write Dispatches:  327,404,  6,264MiB
    > Reads Requeued:          0               Writes Requeued:        33
    > Reads Completed:   345,391,  86,834MiB   Writes Completed:  327,371,  6,264MiB
    > Read Merges:            16,   1,576KiB   Write Merges:      1,238K,   4,954MiB
    > IO unplugs:        580,308               Timer unplugs:           0
    >
    > Throughput (R/W): 288,771KiB/s / 20,832KiB/s
    > Events (sde): 14,030,610 entries
    >
    > So, it appears we flush out writes much more aggressively without the
    > patch in place. I'm not sure why the write bandwidth looks to be higher
    > in the patched case... odd.
    >
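    As a rough cross-check on that write-bandwidth puzzle: dividing the written
    volume by the number of write dispatches in each summary above gives the
    average dispatched request size. The patched run pushes the same ~6.2GiB of
    writes in roughly half as many dispatches (with correspondingly more write
    merges), i.e. requests about twice as large on average. A throwaway C
    snippet for the arithmetic (numbers copied from the summaries above):

	#include <stdio.h>

	int main(void)
	{
		/* MiB written * 1024 / write dispatches, from the blktrace totals */
		double vanilla = 6244.0 * 1024 / 648046;	/* ~9.9 KiB per dispatch */
		double patched = 6264.0 * 1024 / 327404;	/* ~19.6 KiB per dispatch */

		printf("vanilla: %.1f KiB/dispatch\n", vanilla);
		printf("patched: %.1f KiB/dispatch\n", patched);
		return 0;
	}

    Whether that alone accounts for the slightly higher write throughput in the
    patched run is another question, of course.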

    Jan:
    Do you have any new ideas on this?


