 
    Subject: Re: More clues, was: Re: Weird IDE/Triton behavior
    On Fri, 10 Jan 1997 19:31:56 GMT, you wrote:

    > On Fri, 10 Jan 1997 15:36:09 +0100, you wrote:
    >
    > > Hello,
    > >
    > > I have investigated further the decreased speed seen with EIDE drives,
    > > at least with Triton I and II boards, on recent kernels.
    > >
    > > [skipped]
    >
    > I can confirm that something happened after 2.0.22.
    >
    > I said "dd if=/dev/hda of=/dev/null bs=1024k count=100" on a drive with
    > no mounted filesystems using a 2.0.22, a 2.0.27, and a 2.1.20 kernel. I
    > also said "hdparm -tT /dev/hda" as a check.
    >
    > Here are the results:
    >
    > kernel  | time for 100 MB | MB/s | MB/s (hdparm)
    > --------+-----------------+------+--------------
    > 2.0.22  |  88 s           | 1.14 | 1.14
    > 2.0.27  | 166 s           | 0.60 | 0.65
    > 2.1.20  |  87 s           | 1.15 | 1.16
    >
    > [skipped]
    >
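
    For anyone wanting to reproduce the quoted numbers, the test written out
    as commands is below; the time(1) prefix is my addition for measuring the
    elapsed wall-clock time, everything else is exactly the quoted test, and
    the MB/s column is just 100 MB divided by the elapsed time:

        # raw sequential read of 100 MB from the drive, no filesystems mounted
        time dd if=/dev/hda of=/dev/null bs=1024k count=100
        # 100 MB /  88 s = 1.14 MB/s   (2.0.22)
        # 100 MB / 166 s = 0.60 MB/s   (2.0.27)
        # 100 MB /  87 s = 1.15 MB/s   (2.1.20)

        # cross-check with hdparm's cached (-T) and buffered (-t) read timings
        hdparm -tT /dev/hda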

    Further testing reveals that this is probably not a disk problem but
    a memory management one.

    Immediately after booting the 2.0.27 kernel, I can reproduce the slow
    results every time. If I run the system for a while doing something
    like compiling a kernel or running X or generally churning free memory,
    the problem disappears! From then on, the test runs as fast as with the
    other kernels. Interesting.

    The kernel size also seems to matter. I built a special kernel (with
    profiling support) to investigate, but that kernel does not exhibit the
    behavior, and other 2.0.27 kernels I have (with different configurations)
    don't show the slowdown either.

    Luckily, I found out that you can profile a kernel even if it wasn't
    originally compiled with profiling support. (A definite feature.) One
    very obvious difference between a slow run and a fast one is that
    shrink_specific_buffers is called 6970 times in a slow run but *never*
    in a fast one.
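
    For anyone who wants to repeat this, a rough sketch of one way to get
    such per-function counts on a 2.0 kernel (not necessarily the exact
    procedure used here; the profile=2 value and the System.map path are
    only examples): enable profiling at boot time with the profile=
    parameter, then read the counters with readprofile(8):

        # boot with kernel profiling enabled, e.g. at the LILO prompt:
        #   linux profile=2
        # after the dd test, dump the tick counts per kernel function:
        readprofile -m /usr/src/linux/System.map | sort -nr | head -20
        # reset the counters between a "slow" run and a "fast" one:
        readprofile -r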

    I know nothing about Linux memory management, but it appears that there
    are certain initial conditions that can cause big performance hits.

    Do any experts have any ideas?

