    Subject: RE: kernel performance issues 2.4.7 -> 2.4.17-pre8
    Thanks for the feedback.
    Here are my latest results from re-running the same tests with the
    following changes; I also added .17-rc1 to the set of kernels tested.

    I did two things. The first was:

    echo 70 64 64 256 30000 3000 80 0 0 > /proc/sys/vm/bdflush

    The second was:

    hdparm -X66 -d1 -u1 -m16 -c3 /dev/hda

    following the document at:
    I did see some performance gains, but
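
    For anyone re-running this, here is the same hdparm invocation with
    each flag annotated. The readings are mine, taken from the hdparm(8)
    man page, so double-check them against your version:

    # -d1  enable DMA for the drive
    # -u1  unmask other interrupts while servicing a disk interrupt
    # -m16 set the multiple-sector I/O count to 16 sectors
    # -c3  enable 32-bit I/O support (with the special sync sequence)
    # -X66 select UDMA mode 2 (X value = 64 + UDMA mode number)
    hdparm -X66 -d1 -u1 -m16 -c3 /dev/hda
    # read the settings back to confirm they stuck
    hdparm /dev/hda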

    My new questions are :
    Do we(people running Linux) need to do more work on tuning the
    hardware in the current kernels?

    >Note: before running the hdparm test on hda1, you should mount a 4k
    >filesystem onto hda1.
    Where could I find more info on how to do this? Wouldn't changing the
    blocksize of my filesystem kill my existing data? Or do I just need to
    create a filesystem with a 4k blocksize on the device? I hate to ask a
    dumb question, but I had not heard of this being done before.
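
    A minimal sketch of one way to do it, assuming a scratch partition
    (/dev/hdb1 below is hypothetical; mke2fs destroys whatever is on the
    partition, which is exactly why you would not run it on your data):

    # build an ext2 filesystem with a 4k block size (destructive!)
    mke2fs -b 4096 /dev/hdb1
    # mounting it switches the device's soft blocksize from 1k to 4k
    mkdir -p /mnt/scratch
    mount -t ext2 /dev/hdb1 /mnt/scratch
    # now benchmark the underlying block device
    hdparm -t /dev/hdb1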



    -----Original Message-----
    From: Andrew Morton []
    Sent: Thursday, December 13, 2001 2:50 PM
    To: Needham, Douglas
    Subject: Re: kernel performance issues 2.4.7 -> 2.4.17-pre8

    "Needham, Douglas" wrote:
    > ...
    > Overall I discovered that the Red Hat modified kernel beat the kernel.org
    > kernel hands down in throughput. Both the base Red Hat 7.2 kernel and the
    > 7.2 update kernel (2.4.7-9 and 2.4.9-13 respectively) had far better
    > throughput than the .10, .15, .14, .16, and .17-pre8 kernels.

    The 60% drop in bonnie throughput going from 2.4.9 to 2.4.10 indicates that
    something strange has happened. This hasn't been observed by others.

    My suspicion would be that something is wrong with the IDE tuning in your
    builds of the later kernels. Please check this with `hdparm -t /dev/hda1' -
    make sure that these numbers are consistent across kernel versions before
    you even start.
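
    A minimal sketch of that check (-t times buffered reads through the
    disk, which is the number that should stay constant across kernels if
    the IDE tuning is unchanged; -T times cached reads and serves only as
    a sanity baseline; repeat each a few times and compare the averages):

    # sequential buffered disk reads
    hdparm -t /dev/hda1
    # cached reads, as a baseline to rule out measurement noise
    hdparm -T /dev/hda1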

    Note: before running the hdparm test on hda1, you should mount a 4k
    filesystem onto hda1. This changes the softblocksize for the device from 1k
    to 4k and, for some devices, speeds up access to the block device by
    a factor of thirty. This is some bizarro kooky brokenness which the
    2.4.10 patch exposed and I'm still investigating...

    For dbench, errr, just don't bother using it, unless you're using
    a large number of clients - 64 or more. At lower client numbers,
    throughput is enormously dependent upon tiny changes in kernel
    behaviour. Try this:

    echo 70 64 64 256 30000 3000 80 0 0 > /proc/sys/vm/bdflush

    and see the numbers go up greatly.
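
    If you want to experiment with this, a small sketch - note that the
    meanings of the nine fields moved around between 2.4 releases, so
    treat Documentation/sysctl/vm.txt in your own tree as the authority;
    the first field (nfract) is the percentage of the buffer cache that
    may be dirty before bdflush starts writing back:

    # record the current settings so they can be restored later
    cat /proc/sys/vm/bdflush
    # apply the suggested tuning (first field: allow up to 70% dirty buffers)
    echo 70 64 64 256 30000 3000 80 0 0 > /proc/sys/vm/bdflush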
