From: Kyle Moffett <>
Subject: Re: large files unnecessary trashing filesystem cache?
Date: Wed, 19 Oct 2005 14:43:06 -0400
On Oct 19, 2005, at 13:58:37, Guido Fiala wrote:
> Kernel could do the best to optimize default performance,
> applications that consider their own optimal behaviour should do
> so, all other files are kept under default heuristic policy
> (adaptable, configurable one)
>
> Heuristic can be based on access statistic:
>
> streaming/sequential can be guessed by getting exactly 100% cache
> hit rate (drop behind pages immediately),
What about a grep through my kernel sources, or some other linear search through a large directory tree? That would also show exactly 100% cache hit rate, so your method would drop the pages immediately and subsequent greps would be just as slow. I have enough memory to hold a couple of kernel trees, and I want my grepping to push OO.org out of RAM for a bit while I do my kernel development.
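For reference, an application that genuinely is streaming can already make the drop-behind decision itself with posix_fadvise(), rather than relying on a kernel-side guess. A minimal sketch of such a reader (the file name and buffer size below are only placeholders) might look like this:

#define _XOPEN_SOURCE 600
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        char buf[1 << 16];
        off_t done = 0;
        ssize_t n;
        int fd = open("big-media-file", O_RDONLY);  /* placeholder name */

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* We know our own access pattern: tell the kernel it is sequential. */
        posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

        while ((n = read(fd, buf, sizeof(buf))) > 0) {
                /* ... consume buf ... */

                /* Drop the pages we have already consumed from the page cache. */
                posix_fadvise(fd, done, n, POSIX_FADV_DONTNEED);
                done += n;
        }

        close(fd);
        return 0;
}

POSIX_FADV_DONTNEED only frees clean pages, which is exactly the case for a pure reader like this; a repeated grep over a kernel tree, by contrast, gets no such hint and benefits from the pages staying cached.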
Cheers,
Kyle Moffett
--
I lost interest in "blade servers" when I found they didn't throw
knives at people who weren't supposed to be in your machine room.
  -- Anthony de Boer