Date: Sun, 29 Jul 2007 14:12:15 +0100
From: Alan Cox <>
Subject: Re: RFT: updatedb "morning after" problem [was: Re: -mm merge plans for 2.6.23]
> Contrived thing and all, but what it does do is show exactly how bad seeking
> all over swap-space is. If you push it out before hitting enter, the time it
> takes easily grows past 10 minutes (with my 768M) versus sub-second (!) when
> it's all in to start with.
Think in "operations/second" and you get a better view of the disk.
> What are the tradeoffs here? What wants small chunks? Also, as far as I'm
> aware Linux does not do things like up the granularity when it notices it's
> swapping in heavily? That sounds sort of promising...
Small chunks mean you get better efficiency of memory use - large chunks
mean you may well page in a lot more than you needed to each time (and
cause more paging in turn). Your disk would prefer you fed it big linear
I/Os - 512KB would probably be my first guess at a paging chunk size when
tuning a large box under load.
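(A sketch, not from the original mail: the kernel does already expose a
swap-in granularity knob, /proc/sys/vm/page-cluster, which swaps in
2^page-cluster pages at a time. Assuming 4 KiB pages, something like this
computes the setting that gives a 512KB chunk:)

    /* Pick the vm.page-cluster value for a target swap-in chunk.
     * Linux swaps in 2^page-cluster pages per fault; with 4 KiB
     * pages, page-cluster = 7 gives 512KB. Illustrative only. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long page_size = sysconf(_SC_PAGESIZE);  /* typically 4096 */
        long target = 512 * 1024;                /* 512KB chunk */
        int cluster = 0;

        while ((page_size << cluster) < target)
            cluster++;

        printf("echo %d > /proc/sys/vm/page-cluster\n", cluster);
        return 0;
    }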
More radically, if anyone wants to do real researchy type work - how about
log-structured swap with a cleaner?
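(Again a toy sketch under stated assumptions, not anything merged: the
core of log-structured swap is that write-out is always a sequential
append at the log head, and a cleaner later compacts live slots to
reclaim the holes left by freed or rewritten pages. All names here are
hypothetical:)

    /* Toy model: an array of slots stands in for the swap device. */
    #include <stdio.h>
    #include <string.h>

    #define SLOTS 8                /* tiny "swap device" */
    #define PAGES 4

    static int slot_owner[SLOTS];  /* page living in slot, -1 = dead */
    static int page_slot[PAGES];   /* current slot of each page */
    static int head;               /* next append position in the log */

    static void cleaner(void)
    {
        /* compact live slots to the front, one sequential pass */
        int w = 0;
        for (int r = 0; r < head; r++) {
            if (slot_owner[r] >= 0) {
                slot_owner[w] = slot_owner[r];
                page_slot[slot_owner[w]] = w;
                w++;
            }
        }
        head = w;
    }

    static void swap_out(int page)
    {
        if (page_slot[page] >= 0)       /* old copy becomes garbage */
            slot_owner[page_slot[page]] = -1;
        if (head == SLOTS)
            cleaner();                  /* log full: reclaim holes */
        slot_owner[head] = page;
        page_slot[page] = head++;       /* always a sequential append */
    }

    int main(void)
    {
        memset(page_slot, -1, sizeof(page_slot));
        for (int i = 0; i < 12; i++)
            swap_out(i % PAGES);        /* rewrite pages repeatedly */
        for (int p = 0; p < PAGES; p++)
            printf("page %d -> slot %d\n", p, page_slot[p]);
        return 0;
    }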
Alan