    Date: 15 Aug 2000
    Subject: Re: Degrading disk read performance under 2.2.16

    Corin Hartland-Swann wrote:
    >
    > Hi Andre,
    >
    > The revised comparison between 2.2.15 and 2.4.0-test5 is as follows:
    >
    > ==> 2.2.15 <==
    >
    > Dir    Size  BlkSz  Thr#  Read (CPU%)     Write (CPU%)    Seeks (CPU%)
    > -----  ----  -----  ----  --------------  --------------  --------------
    > /mnt/  256   4096   1     27.1371  10.3%  26.7979  23.0%  146.187  0.95%
    > /mnt/  256   4096   2     27.1219  10.7%  26.6606  23.2%  142.233  0.60%
    > /mnt/  256   4096   4     26.9915  10.6%  26.4289  22.9%  142.789  0.50%
    > /mnt/  256   4096   16    26.4320  10.5%  26.1310  23.0%  147.424  0.52%
    > /mnt/  256   4096   32    25.3407  10.1%  25.6822  22.7%  150.750  0.57%
    >
    > ==> 2.4.0-test5 <==
    >
    > Dir    Size  BlkSz  Thr#  Read (CPU%)     Write (CPU%)    Seeks (CPU%)
    > -----  ----  -----  ----  --------------  --------------  --------------
    > /mnt/  256   8192   1     23.4496  9.70%  24.1711  20.6%  139.941  0.88%
    > /mnt/  256   8192   2     16.9398  7.53%  24.0482  20.3%  136.706  0.69%
    > /mnt/  256   8192   4     15.0166  6.82%  23.7892  20.2%  139.922  0.69%
    > /mnt/  256   8192   16    13.5901  6.38%  23.2326  19.4%  147.956  0.70%
    > /mnt/  256   8192   32    13.3228  6.36%  22.8210  19.0%  151.544  0.73%
    >
    > So we're still seeing a drop in performance with 1 thread, and still
    > seeing the same severe degradation 2.2.16 exhibits.
    >
    >
    > Thanks,
    >
    > Corin
    >

    Hi, motivated by your earlier comparison between 2.2.15 and 2.2.16, and
    the possibility that the new elevator might be causing the slowdown, I
    did some benchmarks of my own.

    My conclusion is that the new elevator isn't causing a slowdown, but it
    does need to be tuned.

    The results for my raid0 array show that 2.2.16 is a few percent slower
    than 2.2.15, but that could perhaps be overcome with more experimentation
    with elvtune.

    The tiobench option *--nofrag* made a lot of difference as the number of
    threads increased; without --nofrag the performance drops off a lot
    faster. If you didn't use it in your tests, I would be interested to see
    whether you have the same experience.

    /mnt1 is a Quantum Fireball Plus KA18.2
    /mnt2 is a Quantum Fireball Plus KX20.5
    /mnt3 and /mnt4 are both IBM-DPTA-372050

    All are on their own channels in UDMA66 mode via two Promise PDC20262
    cards. The machine is a dual 433 Celeron with 64MB of RAM.

    Both 2.2.15 and 2.2.16 have the latest IDE and RAID patches applied.
    For 2.2.16 I settled on using elvtune -r 100000000 -w 100000000 -b 128.

    I experimented with a few different elevator values; it seems that
    decreasing -b on the elevator increases the speed of a single thread,
    but the speed degrades faster as the number of threads increases. Does
    anyone have good knowledge of tuning the elevator?
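
    For anyone who wants to poke at the same knobs, the commands look roughly
    like this. This is only a sketch: /dev/hde is a placeholder for whichever
    drive is being tuned, and the settings are per device, so repeat for each
    disk in the array.

    # print the current elevator settings for a device
    elvtune /dev/hde

    # the values I settled on for 2.2.16
    elvtune -r 100000000 -w 100000000 -b 128 /dev/hde

    # a smaller -b (example value): faster with a single thread, but the
    # speed drops off more quickly as the number of threads grows
    elvtune -r 100000000 -w 100000000 -b 32 /dev/hde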


    tiobench.pl --nofrag --numruns 5 --dir //mnt1 --dir //mnt2 --dir //mnt3
    --dir //mnt4 --threads 1 --threads 2 --threads 4 --threads 8 --threads
    16 --threads 32

    For 2.2.15 I get the following:

    Size is MB, BlkSz is bytes, and all read and write rates are MB/sec.

             File   Block  Num   Seq Read      Rand Read     Seq Write     Rand Write
    Dir      Size   Size   Thr   Rate (CPU%)   Rate (CPU%)   Rate (CPU%)   Rate (CPU%)
    -------   ----   -----  ---   -----------   -----------   -----------   -----------
    //mnt1    200   4096    1   18.88 14.0%   0.748 1.60%   18.89 25.2%   0.985 1.85%
    //mnt1    200   4096    2   18.77 13.8%   0.750 1.34%   18.78 24.8%   0.960 1.86%
    //mnt1    200   4096    4   18.65 13.8%   0.750 1.15%   18.75 25.2%   0.960 2.12%
    //mnt1    200   4096    8   18.84 14.3%   0.754 1.07%   18.57 25.2%   0.955 2.37%
    //mnt1    200   4096   16   19.00 15.2%   0.767 1.03%   18.27 24.8%   0.955 2.70%
    //mnt1    200   4096   32   18.89 15.8%   0.778 1.06%   17.86 24.1%   0.954 3.96%

    //mnt2    200   4096    1   19.22 15.7%   0.779 1.08%   18.27 24.7%   0.979 3.74%
    //mnt2    200   4096    2   19.44 15.7%   0.782 1.06%   18.58 25.1%   0.988 3.56%
    //mnt2    200   4096    4   19.58 15.8%   0.780 1.03%   18.85 25.6%   1.001 3.52%
    //mnt2    200   4096    8   19.79 15.9%   0.779 1.02%   18.98 25.8%   1.010 3.49%
    //mnt2    200   4096   16   19.95 16.3%   0.784 1.02%   19.01 25.8%   1.023 3.65%
    //mnt2    200   4096   32   20.00 16.8%   0.791 1.03%   18.96 25.7%   1.032 4.20%

    //mnt3    200   4096    1   20.09 16.8%   0.779 1.04%   18.70 25.2%   1.048 4.10%
    //mnt3    200   4096    2   20.14 16.7%   0.769 1.03%   18.47 24.9%   1.055 3.99%
    //mnt3    200   4096    4   20.17 16.7%   0.759 1.00%   18.28 24.6%   1.065 3.97%
    //mnt3    200   4096    8   20.23 16.8%   0.753 0.98%   18.10 24.4%   1.075 3.95%
    //mnt3    200   4096   16   20.27 16.9%   0.751 0.99%   17.93 24.2%   1.089 4.03%
    //mnt3    200   4096   32   20.27 17.2%   0.754 1.00%   17.74 23.9%   1.101 4.41%

    //mnt4    200   4096    1   20.34 17.1%   0.748 1.01%   17.91 24.1%   1.112 4.34%
    //mnt4    200   4096    2   20.39 17.1%   0.743 1.00%   18.05 24.3%   1.120 4.28%
    //mnt4    200   4096    4   20.41 17.0%   0.739 0.99%   18.19 24.5%   1.127 4.25%
    //mnt4    200   4096    8   20.47 17.1%   0.735 0.98%   18.30 24.7%   1.134 4.25%
    //mnt4    200   4096   16   20.52 17.2%   0.735 0.97%   18.37 24.8%   1.145 4.33%
    //mnt4    200   4096   32   20.52 17.4%   0.736 0.98%   18.41 24.9%   1.153 4.66%

    For 2.2.16 I get:

    Size is MB, BlkSz is bytes, and all read and write rates are MB/sec.

             File   Block  Num   Seq Read      Rand Read     Seq Write     Rand Write
    Dir      Size   Size   Thr   Rate (CPU%)   Rate (CPU%)   Rate (CPU%)   Rate (CPU%)
    -------   ----   -----  ---   -----------   -----------   -----------   -----------
    //mnt1    200   4096    1   18.97 15.2%   0.743 0.98%   18.89 25.3%   0.962 2.03%
    //mnt1    200   4096    2   18.86 15.4%   0.736 0.94%   18.72 25.4%   0.961 2.05%
    //mnt1    200   4096    4   18.78 15.3%   0.735 0.91%   18.69 25.8%   0.950 2.32%
    //mnt1    200   4096    8   18.96 16.1%   0.740 0.96%   18.45 25.9%   0.946 2.64%
    //mnt1    200   4096   16   19.10 17.2%   0.754 1.01%   18.08 25.5%   0.950 3.23%
    //mnt1    200   4096   32   19.06 17.8%   0.767 1.05%   17.75 24.9%   0.951 4.20%

    //mnt2    200   4096    1   19.36 17.8%   0.768 1.07%   18.17 25.4%   0.973 4.02%
    //mnt2    200   4096    2   19.55 17.8%   0.773 1.05%   18.46 25.7%   0.991 3.88%
    //mnt2    200   4096    4   19.68 17.9%   0.772 1.03%   18.70 26.1%   1.003 3.84%
    //mnt2    200   4096    8   19.87 18.0%   0.772 1.01%   18.84 26.6%   1.009 3.85%
    //mnt2    200   4096   16   20.04 18.6%   0.776 1.04%   18.84 26.6%   1.022 4.04%
    //mnt2    200   4096   32   20.12 19.1%   0.784 1.09%   18.77 26.5%   1.031 4.60%

    //mnt3    200   4096    1   20.19 18.9%   0.773 1.08%   18.50 26.0%   1.051 4.51%
    //mnt3    200   4096    2   20.24 18.7%   0.763 1.06%   18.24 25.6%   1.065 4.40%
    //mnt3    200   4096    4   20.26 18.6%   0.754 1.04%   18.03 25.3%   1.073 4.32%
    //mnt3    200   4096    8   20.33 18.6%   0.748 1.04%   17.84 25.1%   1.079 4.34%
    //mnt3    200   4096   16   20.40 18.9%   0.747 1.04%   17.67 24.9%   1.092 4.45%
    //mnt3    200   4096   32   20.40 19.1%   0.749 1.07%   17.48 24.6%   1.103 4.87%

    //mnt4    200   4096    1   20.46 19.0%   0.744 1.07%   17.66 24.8%   1.114 4.77%
    //mnt4    200   4096    2   20.50 18.9%   0.739 1.06%   17.81 25.0%   1.123 4.70%
    //mnt4    200   4096    4   20.52 18.8%   0.734 1.05%   17.96 25.2%   1.130 4.66%
    //mnt4    200   4096    8   20.58 18.8%   0.731 1.04%   18.06 25.4%   1.138 4.66%
    //mnt4    200   4096   16   20.64 19.0%   0.731 1.04%   18.15 25.6%   1.148 4.78%
    //mnt4    200   4096   32   20.65 19.2%   0.733 1.05%   18.20 25.7%   1.157 5.08%


    As can be seen from these results, there is essentially no slowdown as
    the number of threads increases when the --nofrag option is used.
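
    If anyone wants to check that numerically rather than by eyeballing the
    tables, a quick awk pass over saved tiobench output will print the
    1-thread versus 32-thread sequential read rate and the percentage drop.
    This is only a sketch: the filename is hypothetical and it assumes the
    column layout used above (thread count in column 4, sequential read rate
    in column 5).

    awk '$4 == 1  { one[$1] = $5 }
         $4 == 32 { many[$1] = $5 }
         END {
             for (d in one)
                 printf "%s: 1 thr %.2f MB/s, 32 thr %.2f MB/s, drop %.1f%%\n",
                        d, one[d], many[d], 100 * (one[d] - many[d]) / one[d]
         }' tiobench-2.2.16.txt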

    I used the same partitions on the same drives as tested above and put
    them in a 4-way raid0 with a chunk size of 16K. I also added --blocksize
    16384 to tiobench.pl so the test block size matches the chunk size.
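
    For completeness, the array is set up with an ordinary raidtools-style
    /etc/raidtab along these lines. This is only a sketch: the /dev/hd*
    partition names are placeholders (the drives above are identified only by
    model), and chunk-size is in KB, so 16 means 16K chunks.

    raiddev /dev/md0
        raid-level              0
        nr-raid-disks           4
        persistent-superblock   1
        chunk-size              16
        device                  /dev/hde1
        raid-disk               0
        device                  /dev/hdg1
        raid-disk               1
        device                  /dev/hdi1
        raid-disk               2
        device                  /dev/hdk1
        raid-disk               3

    After that, mkraid /dev/md0 plus a normal mke2fs and mount gives the
    array tested below.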

    In 2.2.15 I get:

             File   Block  Num   Seq Read      Rand Read     Seq Write     Rand Write
    Dir      Size   Size   Thr   Rate (CPU%)   Rate (CPU%)   Rate (CPU%)   Rate (CPU%)
    -------   ----   -----  ---   -----------   -----------   -----------   -----------
    //mnt     200  16384    1   62.58 61.7%   2.497 3.00%   33.47 45.6%   7.619 10.6%
    //mnt     200  16384    2   58.14 64.3%   2.968 3.43%   33.13 45.4%   8.173 12.6%
    //mnt     200  16384    4   55.46 66.1%   3.385 3.87%   32.79 45.4%   8.326 14.3%
    //mnt     200  16384    8   53.57 66.6%   3.735 4.38%   32.06 44.6%   8.453 16.0%
    //mnt     200  16384   16   51.94 68.4%   4.062 4.87%   31.39 43.4%   8.802 18.0%
    //mnt     200  16384   32   48.95 70.1%   4.365 5.37%   30.62 41.9%   9.179 20.5%


    For 2.2.16 with elvtune -r 100000000 -w 100000000 -b 128, using the same
    tiobench.pl options, I get:


             File   Block  Num   Seq Read      Rand Read     Seq Write     Rand Write
    Dir      Size   Size   Thr   Rate (CPU%)   Rate (CPU%)   Rate (CPU%)   Rate (CPU%)
    -------   ----   -----  ---   -----------   -----------   -----------   -----------
    //mnt     200  16384    1   57.80 57.4%   2.531 2.89%   33.10 45.6%   7.432 10.6%
    //mnt     200  16384    2   55.63 60.2%   2.996 3.32%   32.71 45.2%   8.317 12.8%
    //mnt     200  16384    4   53.06 62.6%   3.400 3.77%   32.42 45.1%   8.488 14.7%
    //mnt     200  16384    8   51.57 64.8%   3.753 4.48%   31.98 44.7%   8.626 16.6%
    //mnt     200  16384   16   50.10 66.8%   4.082 5.12%   31.25 43.5%   8.963 18.6%
    //mnt     200  16384   32   47.45 69.8%   4.384 5.69%   30.50 42.2%   9.310 21.2%


    Without --nofrag in 2.2.15 I get:

             File   Block  Num   Seq Read      Rand Read     Seq Write     Rand Write
    Dir      Size   Size   Thr   Rate (CPU%)   Rate (CPU%)   Rate (CPU%)   Rate (CPU%)
    -------   ----   -----  ---   -----------   -----------   -----------   -----------
    //mnt     200  16384    1   62.24 59.1%   2.517 2.83%   33.30 45.9%   7.371 10.2%
    //mnt     200  16384    2   42.50 47.6%   2.918 3.48%   33.56 65.0%   6.687 12.5%
    //mnt     200  16384    4   38.58 45.0%   3.283 3.93%   33.59 70.2%   6.624 13.9%
    //mnt     200  16384    8   36.17 44.2%   3.585 4.30%   32.99 73.0%   6.751 15.0%
    //mnt     200  16384   16   33.22 46.6%   3.847 4.71%   32.34 74.4%   6.991 16.6%
    //mnt     200  16384   32   29.91 53.0%   4.041 5.08%   28.87 68.9%   7.046 18.6%

    Without --nofrag in 2.2.16 I get:

             File   Block  Num   Seq Read      Rand Read     Seq Write     Rand Write
    Dir      Size   Size   Thr   Rate (CPU%)   Rate (CPU%)   Rate (CPU%)   Rate (CPU%)
    -------   ----   -----  ---   -----------   -----------   -----------   -----------
    //mnt     200  16384    1   57.47 56.7%   2.513 2.30%   33.32 46.3%   7.542 10.1%
    //mnt     200  16384    2   39.35 47.0%   2.923 2.96%   33.50 65.9%   6.749 12.4%
    //mnt     200  16384    4   36.28 44.5%   3.326 3.49%   33.56 69.6%   6.759 13.7%
    //mnt     200  16384    8   34.02 43.8%   3.625 3.99%   33.30 73.5%   6.847 14.9%
    //mnt     200  16384   16   31.13 46.0%   3.877 4.46%   33.07 76.2%   7.064 16.5%
    //mnt     200  16384   32   28.21 52.9%   4.073 4.86%   30.81 73.5%   7.070 18.6%

    What's the go with sequential writes being stuck at around 33 MB/sec?

    Glenn

