Subject: Re: Terrible VM in 2.4.11+?
From: Austin Gonyou <>
Date: 08 Jul 2002 18:04:45 -0500
Here are the params I'm running, but note it's with an -aa tree, just FYI.
vm.bdflush = 60 1000 0 0 1000 800 60 50 0
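
For reference, a minimal sketch of applying these values at runtime, assuming a 2.4 kernel with /proc mounted (the meaning of each field differs between mainline and the -aa tree, so check Documentation/sysctl/vm.txt in your own source tree):

  # Write the nine bdflush fields directly (not persistent across reboot):
  echo "60 1000 0 0 1000 800 60 50 0" > /proc/sys/vm/bdflush

  # Or equivalently via sysctl(8); adding the vm.bdflush line to
  # /etc/sysctl.conf applies it at boot:
  sysctl -w vm.bdflush="60 1000 0 0 1000 800 60 50 0"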
On Mon, 2002-07-08 at 17:50, Lukas Hejtmanek wrote:
> Yes, I know a few people who report it works well for them. However, for me
> and some others it does not. The system is Red Hat 7.2, ASUS A7V motherboard,
> and /dev/hda is on a Promise controller. The following helps a lot:
>
> while true; do sync; sleep 3; done
>
> How did you modify the params of bdflush? I do not want to suspend I/O buffers
> or the disk cache.
>
> Another thing to notice: the X server almost always has some pages swapped out
> to the swap space on /dev/hda. When bdflush is flushing buffers, the X server
> stops, as it has no access to the swap area during the I/O lock.
>
> On Mon, Jul 08, 2002 at 05:37:02PM -0500, Austin Gonyou wrote:
> > I do things like this regularly, and have been using kernels 2.4.10+ on
> > many types of boxen, but have yet to see this behavior. I've done this
> > same type of test with 16k blocks up to 10M and not had this problem. I
> > usually test with regard to I/O on SCSI, but have tested on IDE, since we
> > use many IDE systems for developers. I found, though, that using something
> > like LVM and overwhelming it causes bdflush to go crazy. I can hit the
> > wall you refer to then. When bdflush is too busy, it does in fact seem to
> > *lock* the system, but of course it's just bdflush doing its thing. If I
> > modify the bdflush params, things work just fine, or are at least usable.
>
> --
> Lukáš Hejtmánek

-- 
Austin Gonyou <austin@digitalroadkill.net>
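
To reproduce the stall described above, a rough sketch along the lines of the 16k-block tests mentioned in the thread (the file path and byte count here are illustrative, not from the thread):

  # Dirty roughly 1 GB of buffer cache in 16k blocks, then force
  # writeback; watch whether interactive processes (e.g. X) stall
  # while bdflush works through the backlog.
  dd if=/dev/zero of=/tmp/vmtest bs=16k count=65536
  sync
  rm -f /tmp/vmtest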