Subject: Re: Bad SSD performance with recent kernels
From: Shaohua Li <>
Date: Mon, 30 Jan 2012 15:22:38 +0800
On Mon, 2012-01-30 at 08:13 +0100, Herbert Poetzl wrote:
> On Mon, Jan 30, 2012 at 11:17:38AM +0800, Shaohua Li wrote:
> > 2012/1/30 Wu Fengguang <wfg@linux.intel.com>:
> >> On Sun, Jan 29, 2012 at 02:13:51PM +0100, Eric Dumazet wrote:
> >>> On Sunday 29 January 2012 at 19:16 +0800, Wu Fengguang wrote:
>
> >>>> Note that as long as buffered read(2) is used, it makes almost no
> >>>> difference (well, at least for now) to do "dd bs=128k" or "dd bs=2MB":
> >>>> the 128kb readahead size will be used underneath to submit read IO.
>
> >>> Hmm...
>
> >>> # echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=128k count=32768
> >>> 32768+0 records in
> >>> 32768+0 records out
> >>> 4294967296 bytes (4.3 GB) copied, 20.7718 s, 207 MB/s
>
> >>> # echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=2M count=2048
> >>> 2048+0 records in
> >>> 2048+0 records out
> >>> 4294967296 bytes (4.3 GB) copied, 27.7824 s, 155 MB/s
>
> >> Interesting. Here are my test results:
>
> >> root@lkp-nex04 /home/wfg# echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=128k count=32768
> >> 32768+0 records in
> >> 32768+0 records out
> >> 4294967296 bytes (4.3 GB) copied, 19.0121 s, 226 MB/s
> >> root@lkp-nex04 /home/wfg# echo 3 >/proc/sys/vm/drop_caches ;dd if=/dev/sda of=/dev/null bs=2M count=2048
> >> 2048+0 records in
> >> 2048+0 records out
> >> 4294967296 bytes (4.3 GB) copied, 19.0214 s, 226 MB/s
>
> >> Maybe the /dev/sda performance bug on your machine is sensitive to timing?
>
> > I got similar results:
> > 128k: 224M/s
> > 1M:   182M/s
>
> > The 1M block size is slow; I guess it's CPU related.
>
> > And as for the big regression with kernels newer than 2.6.38,
> > please check if idle=poll helps. CPU idle dramatically impacts
> > disk performance, and even the latest cpuidle governor doesn't
> > help for some CPUs.
>
> Here are the tests with idle=poll and after switching to a 128k
> (instead of 1M) block size (same amount of data transferred):
>
> kernel      ------------ read /dev/sda -------------
>             --- noop ---  - deadline -  ---- cfq ---
>             [MB/s]  %CPU  [MB/s]  %CPU  [MB/s]  %CPU
> ----------------------------------------------------
> 3.2.2        45.82   3.7   44.85   3.6   45.04   3.4
> 3.2.2i       45.59   2.3   51.78   2.6   46.03   2.2
> 3.2.2i128   250.24  20.9  252.68  21.3  250.00  21.6
>
> kernel      -- write ---  ----------------- read ------------------
>             --- noop ---  --- noop ---  - deadline -  ---- cfq ---
>             [MB/s]  %CPU  [MB/s]  %CPU  [MB/s]  %CPU  [MB/s]  %CPU
> -------------------------------------------------------------------
> 3.2.2       270.95  42.6  162.36   9.9  162.63   9.9  162.65  10.1
> 3.2.2i      269.10  41.4  170.82   6.6  171.20   6.6  170.91   6.7
> 3.2.2i128   270.38  67.7  162.35  10.2  163.01  10.3  162.34  10.7

What are 3.2.2i and 3.2.2i128? Does idle=poll help?
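
A side note on the readahead size discussed above: the 128kb window is a
per-device setting that can be inspected and tuned at runtime through sysfs,
so its effect on sequential reads can be tested directly. A minimal sketch,
assuming the disk under test is sda (the 512kb value is only illustrative):

    # current readahead window, in kilobytes (128 is the usual default)
    cat /sys/block/sda/queue/read_ahead_kb

    # the same setting via blockdev, reported in 512-byte sectors
    blockdev --getra /dev/sda

    # widen the window for the next run, then re-test with dd as above
    echo 512 > /sys/block/sda/queue/read_ahead_kb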
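
On the idle=poll suggestion: a similar effect can usually be had without
rebooting via the PM QoS interface, which keeps CPUs out of deep C-states
for as long as the device file is held open. A sketch, assuming the kernel
exposes /dev/cpu_dma_latency and accepts an ASCII value written to it
(older kernels expect a binary s32 instead):

    # hold an fd on the PM QoS device; requesting 0us DMA latency
    # keeps CPUs out of deep idle states while the fd stays open
    exec 3>/dev/cpu_dma_latency
    echo -n 0 >&3

    # run the benchmark while the request is active
    echo 3 >/proc/sys/vm/drop_caches; dd if=/dev/sda of=/dev/null bs=128k count=32768

    # closing the fd drops the request and re-enables deep idle
    exec 3>&-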
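
The per-elevator columns in Herbert's tables can also be collected in a
single boot by switching the I/O scheduler through sysfs between runs. A
rough sketch of such a loop, again assuming sda and a kernel that offers
all three elevators:

    for sched in noop deadline cfq; do
        echo "$sched" > /sys/block/sda/queue/scheduler
        echo 3 > /proc/sys/vm/drop_caches
        dd if=/dev/sda of=/dev/null bs=128k count=32768
    done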