Date: Sat, 27 Oct 2001 22:05:57 -0700
From: Mike Fedyk <>
Subject: Re: xmm2 - monitor Linux MM active/inactive lists graphically
On Sat, Oct 27, 2001 at 03:14:44PM +0200, Giuliano Pochini wrote:
> 
> > block: 1024 slots per queue, batch=341
> > 
> > Wrote 600.00 MB in 71 seconds -> 8.39 MB/s (7.5 %CPU)
> > 
> > Still very spiky, and during the write disk is uncapable of doing any
> > reads. IOW, no serious application can be started before writing has
> > finished. Shouldn't we favour reads over writes? Or is it just that
> > the elevator is not doing its job right, so reads suffer?
> > 
> >  procs                     memory     swap        io     system        cpu
> >  r  b  w  swpd  free  buff  cache   si  so   bi    bo   in   cs  us  sy  id
> >  0  1  1     0  3596   424 453416    0   0    0 40468  189  508   2   2  96
> 
> 341*127K = ~40M.
> 
> Batch is too high. It doesn't explain why reads get delayed so much, anyway.
> 
Try modifying the elevator queue length with elvtune.
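elvtune is basically just a front-end for the block layer's elevator ioctls, so you can poke the same values from a few lines of C. Rough sketch below, untested; the ioctl numbers and struct layout are what I remember from the 2.4 headers (BLKELVGET/BLKELVSET in <linux/fs.h>, blkelv_ioctl_arg_t in <linux/blkdev.h>), so double-check against your tree before trusting them:

/*
 * elvtune-style sketch: print the elevator latencies for a block
 * device, then halve the read latency so queued writes can't keep
 * passing reads for as long.  Assumes the 2.4 elevator ioctls.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>

#ifndef BLKELVGET
#define BLKELVGET _IO(0x12,106)	/* numbers as in 2.4 <linux/fs.h>, from memory */
#define BLKELVSET _IO(0x12,107)
#endif

struct elv_args {		/* mirrors blkelv_ioctl_arg_t, assumed layout */
	int queue_ID;
	int read_latency;
	int write_latency;
	int max_bomb_segments;
};

int main(int argc, char **argv)
{
	struct elv_args a;
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s /dev/hdX\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0 || ioctl(fd, BLKELVGET, &a) < 0) {
		perror("BLKELVGET");
		return 1;
	}
	printf("read_latency %d, write_latency %d\n",
	       a.read_latency, a.write_latency);

	a.read_latency /= 2;	/* reads expire sooner -> fewer writes may bypass them */
	if (ioctl(fd, BLKELVSET, &a) < 0) {
		perror("BLKELVSET");
		return 1;
	}
	close(fd);
	return 0;
}

From the command line, "elvtune -r <read_lat> -w <write_lat> /dev/hdX" should do the same thing, if I remember the options right.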
BTW, 2.2.19 has queue lengths in the hundreds, while 2.4.xx has them in the thousands. I've set my 2.4 kernels back to the 2.2 defaults, and interactive performance has gone up considerably. These are subjective tests, though.
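Just to put rough numbers on it: with ~127K per merged request (the figure Giuliano used above) and a queue that has filled up with writes, a 2.2-sized queue vs. the 1024 slots above is roughly 16MB vs. 127MB of writes sitting in front of any read. Quick back-of-the-envelope, the slot counts are only illustrative:

/*
 * Back-of-the-envelope: how much write data can be queued ahead of a
 * read for different request-queue sizes, assuming ~127K per merged
 * request and a queue full of writes.  Slot counts are illustrative,
 * not the exact 2.2/2.4 defaults.
 */
#include <stdio.h>

int main(void)
{
	const int request_kib = 127;
	const int slots[] = { 128, 1024 };
	const double write_rate_mbs = 8.39;	/* throughput seen in the quote above */
	int i;

	for (i = 0; i < 2; i++) {
		double mb = slots[i] * request_kib / 1024.0;
		printf("%4d slots: ~%.0f MB queued, ~%.0f s of backlog at %.2f MB/s\n",
		       slots[i], mb, mb / write_rate_mbs, write_rate_mbs);
	}
	return 0;
}

At the ~8.4 MB/s that box was writing, the big queue works out to on the order of 15 seconds before a read can reach the disk, which lines up with the "no serious application can be started" complaint.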
Mike