    Subject: Re: Linux 2.6.29

    On Thu, 2009-04-02 at 20:34 -0700, Linus Torvalds wrote:
    >
    > On Thu, 2 Apr 2009, Jeff Garzik wrote:
    > >
    > > I was really surprised the performance was so high at first, then fell off so
    > > dramatically, on the SSD here.
    >
    > Well, one rather simple explanation is that if you hadn't been doing lots
    > of writes, then the background garbage collection on the Intel SSD gets
    > ahead of the game, and gives you lots of bursty nice write bandwidth due
    > to having nicely compacted and pre-erased blocks.
    >
    > Then, after lots of writing, all the pre-erased blocks are gone, and you
    > are down to a steady state where it needs to GC and erase blocks to make
    > room for new writes.
    >
    > So that part doesn't surprise me per se. The Intel SSDs definitely
    > fluctuate a bit timing-wise (but I love how they never degenerate to the
    > "ooh, that _really_ sucks" case that the other SSDs and the rotational
    > media I've seen do when you do random writes).
    >

    23MB/s seems a bit low, though; I'd try with O_DIRECT. ext3 doesn't do
    writepages, and the SSD may be very sensitive to smaller writes (what
    brand?)
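
    Something along these lines is what I mean by trying O_DIRECT -- an
    untested sketch, not the actual overwrite program; the 4k alignment,
    1MB write size, total size and file name are just assumptions:

    /* Untested O_DIRECT overwrite sketch.  Both the buffer and the write
     * size must be aligned (4096 assumed here) or the writes fail with
     * EINVAL. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define ALIGN 4096                   /* assumed logical block size */
    #define CHUNK (1024 * 1024)          /* 1MB writes */
    #define TOTAL (256UL * 1024 * 1024)  /* 256MB total, just for the test */

    int main(int argc, char **argv)
    {
            const char *path = argc > 1 ? argv[1] : "testfile";
            unsigned long done = 0;
            void *buf;
            int fd;

            /* O_DIRECT needs an aligned buffer */
            if (posix_memalign(&buf, ALIGN, CHUNK)) {
                    perror("posix_memalign");
                    return 1;
            }
            memset(buf, 0xab, CHUNK);

            fd = open(path, O_WRONLY | O_CREAT | O_DIRECT, 0644);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            while (done < TOTAL) {
                    ssize_t ret = write(fd, buf, CHUNK);
                    if (ret != CHUNK) {
                            perror("write");
                            return 1;
                    }
                    done += ret;
            }

            close(fd);
            free(buf);
            return 0;
    }

    That bypasses the page cache entirely, so it should tell you whether
    the low numbers come from the device itself or from the writeback path.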

    > The fact that it also happens for the regular disk does imply that it's
    > not the _only_ thing going on, though.
    >

    Jeff, if you blktrace it I can make up a seekwatcher graph. My bet is
    that pdflush is stuck writing the indirect blocks, and doing a ton of
    seeks.

    You could change the overwrite program to also do sync_file_range on the
    block device ;)
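
    Roughly like this, for what it's worth -- untested, and the chunked
    start-writeback-then-wait-for-the-previous-chunk pattern plus the
    helper name are only for illustration:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sys/types.h>

    /*
     * Untested sketch: after writing each chunk, start async writeback
     * for it right away, then wait for the chunk before it, so only a
     * bounded amount of dirty data is ever queued up behind pdflush.
     * Call it as flush_chunk(fd, i * chunk_size, chunk_size) from the
     * overwrite loop; fd can be the block device.
     */
    int flush_chunk(int fd, off64_t offset, off64_t len)
    {
            /* kick off writeback for [offset, offset + len) without blocking */
            int ret = sync_file_range(fd, offset, len, SYNC_FILE_RANGE_WRITE);

            if (ret)
                    return ret;

            /* wait for the previous chunk's writeback to complete */
            if (offset >= len)
                    ret = sync_file_range(fd, offset - len, len,
                                          SYNC_FILE_RANGE_WAIT_BEFORE |
                                          SYNC_FILE_RANGE_WRITE |
                                          SYNC_FILE_RANGE_WAIT_AFTER);
            return ret;
    }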

    -chris



