    Date: 31 Aug 2002
    Subject: Re: Multi disk performance (8 disks), limit 230 MB/s

    Roman Dementiev wrote:
    >
    > Hi all,
    >
    > I have been doing some benchmarking experiments on kernel 2.4.19 with 8
    > IDE disks. Due to the poor performance of 2 disks on one IDE channel, we
    > bought 4 Promise Ultra 100 TX2 (32-bit 66 MHz) controllers and, to avoid
    > bus saturation, a Supermicro P4PDE motherboard with multiple PCI buses
    > (64-bit 66 MHz) and 2 Xeons. I have already submitted the PCI slot
    > placement problems to the mailing list, but theoretically I can live
    > with the current IDE controllers->PCI slots assignment.
    >
    > The assignment is the following: 3 IDE controllers are connected to one
    > PCI 64-bit hub with a bandwidth of 1 GByte/s, and the 4th controller is
    > on another hub with the same characteristics.
    >
    > Theoretically, with 6 IBM disks (47 MB/s from the first cylinders) I
    > should achieve about 266 MB/s (32-bit x 66 MHz) < 6*47 = 282 MB/s, AND
    > 2*47 = 94 MB/s < 266 MB/s from the last two disks. Thus the total rate
    > should be 266 + 94 = 360 MB/s.
    >
    > BUT no matter which set of the disks I read from or write to, I get the
    > following parallel read/write rates (raw access):
    >
    >          write (MB/s)  read (MB/s)  systime (top)  real/user/sys (s)
    > 1 disk :      48            45          3 %         3.0 / 0.1 /  0.4
    > 2 disks:      83            94         10 %         3.5 / 0.1 /  0.6
    > 4 disks:     131           189         21 %         4.3 / 0.4 /  2.8
    > 5 disks:     172           233                      4.5 / 0.5 /  4.5
    > 6 disks:     197           234 ?       30 %         5.2 / 0.6 /  6.6
    > 7 disks:     209 ?         230 ?                    5.9 / 0.6 /  8.8
    > 8 disks:     214 ?         229 ?       40 %         6.7 / 0.8 / 10.8
    >

    Raw access in 2.4 isn't very good - it submits IO in 512-byte chunks.
    If you can hunt down the `rawvary' patch, that might help, but I don't
    know whether it works against IDE.
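
    For reference, an untested sketch of such a raw read (the device name
    is an assumption - bind the disk first with the util-linux raw(8)
    tool, e.g. `raw /dev/raw/raw1 /dev/hde'). The buffer and transfer
    size must be multiples of the 512-byte sector size, and even large
    reads like these get split into 512-byte requests by the 2.4 raw
    driver:

        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        #define CHUNK	(512 * 1024)	/* request size, a multiple of 512 */

        int main(void)
        {
                void *buf;
                ssize_t n;
                int fd;

                /* raw IO needs a sector-aligned buffer */
                if (posix_memalign(&buf, 512, CHUNK)) {
                        perror("posix_memalign");
                        return 1;
                }

                /* assumed already bound to the disk under test */
                fd = open("/dev/raw/raw1", O_RDONLY);
                if (fd < 0) {
                        perror("open");
                        return 1;
                }

                while ((n = read(fd, buf, CHUNK)) > 0)
                        ;	/* discard the data - throughput only */

                close(fd);
                free(buf);
                return 0;
        }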

    Testing 2.5 would be interesting ;)

    Try the same test with O_DIRECT reads or writes against ext2 filesystems.
    That will use 4k blocks.
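
    A minimal sketch of that test (the mount point and file name are made
    up; O_DIRECT needs the buffer, the file offset and the transfer size
    aligned to the filesystem block size):

        #define _GNU_SOURCE		/* for O_DIRECT */
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        #define CHUNK	(1024 * 1024)	/* 1 MB per request */

        int main(void)
        {
                void *buf;
                ssize_t n;
                int fd;

                /* 4k alignment to match the ext2 block size */
                if (posix_memalign(&buf, 4096, CHUNK)) {
                        perror("posix_memalign");
                        return 1;
                }

                /* some large pre-written file on the ext2 filesystem */
                fd = open("/mnt/ext2/testfile", O_RDONLY | O_DIRECT);
                if (fd < 0) {
                        perror("open");
                        return 1;
                }

                while ((n = read(fd, buf, CHUNK)) > 0)
                        ;	/* throughput only */

                close(fd);
                free(buf);
                return 0;
        }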

    Be sceptical about the `top' figures. They're just statistical and
    are subject to gross errors under some circumstances - you may have
    run out of CPU (unlikely with direct IO).

    Direct IO is synchronous: the kernel waits for completion of each
    IO request before submitting the next. You _must_ submit large
    IOs. That means passing 128k or more to each invocation of the
    read and write system calls. See the bandwidth-versus-request-size
    numbers I measured for O_DIRECT ext3:
    http://www.geocrawler.com/lists/3/SourceForge/7493/0/9329947/
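
    A rough harness for measuring that curve on your own setup (a sketch;
    the request-size list is only illustrative):

        #define _GNU_SOURCE		/* for O_DIRECT */
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/time.h>
        #include <unistd.h>

        /* time 64 MB of O_DIRECT reads at one request size, print MB/s */
        static void bench(const char *path, size_t reqsize)
        {
                const size_t total = 64 * 1024 * 1024;
                struct timeval t0, t1;
                size_t done = 0;
                double secs;
                void *buf;
                int fd;

                if (posix_memalign(&buf, 4096, reqsize)) {
                        perror("posix_memalign");
                        return;
                }
                fd = open(path, O_RDONLY | O_DIRECT);
                if (fd < 0) {
                        perror("open");
                        free(buf);
                        return;
                }

                gettimeofday(&t0, NULL);
                while (done < total && read(fd, buf, reqsize) > 0)
                        done += reqsize;
                gettimeofday(&t1, NULL);

                secs = (t1.tv_sec - t0.tv_sec) +
                       (t1.tv_usec - t0.tv_usec) / 1e6;
                printf("%8lu-byte requests: %6.1f MB/s\n",
                       (unsigned long)reqsize, done / secs / 1e6);

                close(fd);
                free(buf);
        }

        int main(int argc, char **argv)
        {
                size_t sizes[] = { 4096, 32768, 131072, 524288, 1048576 };
                size_t i;

                if (argc < 2) {
                        fprintf(stderr, "usage: %s <file-on-ext2>\n", argv[0]);
                        return 1;
                }
                for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                        bench(argv[1], sizes[i]);
                return 0;
        }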
    -
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/
