Date:    Sun, 14 Jul 1996 10:17:21 +0000 (GMT)
From:    Gerard Roudier <>
Subject: Bonnie benchmark is strict but unfair
Here are some Bonnie results: (P133/32MB/NCR53C810/IBMS12), 100 MB file.
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
2.0.6 /4k 100  4027 66.4  4371 20.1  1844 15.4  4041 66.2  4252 16.2  90.6  3.4
2.0.6 /1k 100  3938 75.7  3484 20.7  1704 19.2  3546 62.2  4338 20.7  54.9  2.7
1.2.13/4k 100  3659 79.7  4086 43.3  1774 14.7  2864 50.9  4255 17.5 111.0  4.5
1.2.13/1k 100  3394 83.1  3037 33.2  1331 14.0  2366 51.7  3474 24.2  99.7  4.9
(1k means a 1K block ext2 fs, 4k means a 4K block ext2 fs)
Do not try to infer anything about driver speed from these results. The driver was the same for all benchmarks, and so were the hardware and the benchmark binary.
The driver/adapter pair must allow the O/S to operate peripheral devices with maximum reliability and performance. A driver must not keep the system and its peripherals from operating under the best possible conditions.
The speed of linear read / write / rewrite of one file at a time is an interesting criterion. However, unless you only intend to play with benchmarks such as bonnie (iozone, ...), a relevant evaluation of a disk IO subsystem must not be based on such basic benchmarks alone. Just try to read 2 files at the same time, and you will observe up to a 50% performance increase with Tagged Command Queuing enabled (if the device supports it); a minimal sketch of such a test follows.
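For what it is worth, here is a minimal sketch of such a two-reader test (the file names and the 64 KB read size are my own choices, nothing from bonnie): the parent and a forked child each read one file sequentially, so the IO subsystem sees two concurrent streams and tagged queuing gets a chance to reorder the requests.

/*
 * Two concurrent sequential readers: the parent and a forked child
 * each read one file from start to end, so the disk sees two streams
 * at once and tagged command queuing can reorder the requests.
 * "file1"/"file2" and the 64 KB read size are made-up values.
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>

static void read_file(const char *path)
{
    char buf[65536];
    ssize_t n;
    long total = 0;
    time_t t0 = time(NULL), t1;
    int fd = open(path, O_RDONLY);

    if (fd < 0) { perror(path); exit(1); }
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        total += n;
    close(fd);
    t1 = time(NULL);
    printf("%s: %ld KB in %ld s\n", path, total / 1024, (long)(t1 - t0));
}

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {              /* child reads the first file  */
        read_file("file1");
        _exit(0);
    }
    read_file("file2");          /* parent reads the second one */
    wait(NULL);
    return 0;
}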
On the other hand, some results may simply be wrong, and in that case the evaluation can only be wrong too.
Just look at the result 2.0.6/1k.
3.9 MB/sec for output per char (stdio 1K) and 3.5 MB/sec for output per 8k block. How is that possible? IMHO, it is not possible, and the right result is about 3.7 MB/sec for both benchmarks on my configuration.
3.5 MB/sec for input per char (stdio 1K) and 4.3 MB/sec for input per 8k block. IMHO, the result for input per char is wrong: per-char input speed is about the same as per-block input speed on my configuration under linux-2.
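To make clear what is being compared, here is a rough sketch of the two sequential-output code paths: one putc() per byte through stdio versus one write() per 8k block. It only illustrates the idea; it is not bonnie's source, and the file names are made up.

/*
 * Per-char output: one putc() per byte, through stdio.
 * Per-block output: one write() per 8 KB buffer.
 * File names and sizes are made up; this only illustrates the two
 * code paths being compared, it is not bonnie itself.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

#define CHUNK 8192
#define TOTAL (100L * 1024 * 1024)         /* 100 MB, as in the runs above */

static void write_per_char(const char *path)
{
    FILE *f = fopen(path, "w");
    long i;

    if (!f) { perror(path); exit(1); }
    for (i = 0; i < TOTAL; i++)            /* one libc call per byte */
        putc((int)(i & 0x7f), f);
    fclose(f);
}

static void write_per_block(const char *path)
{
    char buf[CHUNK];
    long done;
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (fd < 0) { perror(path); exit(1); }
    memset(buf, 'x', sizeof(buf));
    for (done = 0; done < TOTAL; done += CHUNK)
        write(fd, buf, CHUNK);
    close(fd);
}

int main(void)
{
    write_per_char("test.char");
    write_per_block("test.block");
    return 0;
}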
Is the rewrite result wrong too?
How can one prove it?
I just ran a bonnie benchmark with a file size of about 10 times the main memory size (using the end of the medium, which is slower). For 32MB of RAM, I tried "bonnie -s 300". I disabled the "seeks" phase so as not to wear out my disk.
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
2.0.6/1k  300  3340 64.8  3207 20.0  1604 18.3  3562 72.3  3853 20.0
Results are now less wrong.
Linux does lots of caching and asynchronous write operations (even for some meta-data).
Writing with putc()      fills the cache(s)
Rewriting                flushes some data from the previous step
Writing intelligently    same
Reading with getc()      same
Reading intelligently    IS THE ONLY RIGHT RESULT.
So, the speed result for "putc" is too high and the speed result for "getc" is too low.
Rewrite is about OK, but only by luck.
Now, I am sure that "asynchronous read-ahead" in Linux is OK and that Linux is not mad enough to write characters faster than blocks.
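If you want a write figure that the cache cannot inflate, stop the clock only after the data has really reached the disk. A minimal sketch (file name and sizes are my assumptions):

/*
 * Write-throughput measurement that is not fooled by the buffer cache:
 * the clock is stopped only after fsync() has pushed the data to disk.
 * File name and sizes are made up.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define CHUNK 8192
#define TOTAL (100L * 1024 * 1024)          /* 100 MB test file */

int main(void)
{
    char buf[CHUNK];
    long done;
    time_t t0, t1;
    int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (fd < 0) { perror("testfile"); exit(1); }
    memset(buf, 'x', sizeof(buf));

    t0 = time(NULL);
    for (done = 0; done < TOTAL; done += CHUNK)
        write(fd, buf, CHUNK);
    fsync(fd);                              /* flush what is still cached */
    t1 = time(NULL);

    printf("%ld KB/sec\n", (TOTAL / 1024) / (long)(t1 > t0 ? t1 - t0 : 1));
    close(fd);
    return 0;
}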
Now, I will consider the "random seeks" difference between 1.2.13 and 2.0.6 with a 1K fs.
2.0.6 result is 54.9/sec and 1.2.13 result is 99.7/sec.
That is a well-known problem that appears with the new page cache, and the reason is well known too. During the bonnie seek phase, the buffer cache is shrunk so much that indirect blocks cannot stay cached. So almost every time bonnie accesses a block, the corresponding indirect block has to be read from the disk first, which roughly doubles the number of seeks.
With a 4k fs, the number of indirect blocks is 16 times lower than with a 1k fs (a 4k indirect block maps 1024 x 4k = 4 MB of data, while a 1k indirect block maps only 256 x 1k = 256 KB), so the buffer cache has a much better chance of keeping the indirect blocks cached.
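A quick back-of-the-envelope calculation, assuming 4-byte ext2 block pointers and ignoring the 12 direct blocks and the double-indirect overhead, gives about 400 indirect blocks for the 100 MB file on a 1k fs against about 25 on a 4k fs:

/*
 * Back-of-the-envelope count of ext2 indirect blocks for a given file
 * size.  Block pointers are 4 bytes; the 12 direct blocks and the
 * double-indirect overhead are ignored, which is good enough to show
 * the 16x ratio between 1k and 4k filesystems.
 */
#include <stdio.h>

static long indirect_blocks(long file_bytes, long block_size)
{
    long ptrs_per_block = block_size / 4;        /* 4-byte block pointers */
    long data_blocks    = file_bytes / block_size;

    return (data_blocks + ptrs_per_block - 1) / ptrs_per_block;
}

int main(void)
{
    long size = 100L * 1024 * 1024;              /* 100 MB bonnie file */

    printf("1k fs: ~%ld indirect blocks\n", indirect_blocks(size, 1024));
    printf("4k fs: ~%ld indirect blocks\n", indirect_blocks(size, 4096));
    return 0;
}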
Even if this result looks very bad for Linux-2, the phenomenon does not seem to affect performance for "normal" disk IO usage.
Bonnie benchmark is indeed unfair.
Gerard.