From: Kyle Moffett
Subject: Re: large files unnecessary trashing filesystem cache?
Date: 19 Oct 2005
On Oct 19, 2005, at 13:58:37, Guido Fiala wrote:
> The kernel could do its best to optimize default performance;
> applications that know their own optimal behaviour should handle it
> themselves, and all other files are kept under the default heuristic
> policy (an adaptable, configurable one).
>
> The heuristic can be based on access statistics:
>
> streaming/sequential access can be guessed from an exactly 100%
> cache hit rate (drop the pages behind the read point immediately),

What about a grep through my kernel sources or some other linear
search through a large directory tree?  That would get exactly a 100%
cache hit rate, which would cause your method to drop the pages
immediately, meaning that subsequent greps are equally slow.  I have
enough memory to hold a couple of kernel trees, and I want my grepping
to push OO.org out of RAM for a bit while I do my kernel development.
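
For reference, the application-side hinting mentioned in the quoted
proposal already exists as posix_fadvise().  A minimal sketch, assuming
a Linux/glibc system where POSIX_FADV_SEQUENTIAL and POSIX_FADV_DONTNEED
behave as documented, of a streaming reader that drops its own pages
behind it as it goes:

/* Sketch only: a streaming reader that tells the kernel its access is
 * sequential and then drops each chunk from the page cache after use,
 * so a huge file does not evict everything else.  Error handling is
 * minimal on purpose.
 */
#define _POSIX_C_SOURCE 200112L

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define CHUNK (1 << 20)		/* read 1 MiB at a time */

int main(int argc, char **argv)
{
	char *buf = malloc(CHUNK);
	off_t done = 0;
	ssize_t n;
	int fd;

	if (argc < 2 || !buf)
		return 1;

	fd = open(argv[1], O_RDONLY);
	if (fd < 0)
		return 1;

	/* Tell the kernel we will read sequentially (bigger readahead). */
	posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

	while ((n = read(fd, buf, CHUNK)) > 0) {
		/* ... consume buf here ... */

		/* Done with these pages; let the kernel drop them. */
		posix_fadvise(fd, done, n, POSIX_FADV_DONTNEED);
		done += n;
	}

	close(fd);
	free(buf);
	return 0;
}

The grep-through-a-tree case is exactly the one where no such hint is
given, so the default of keeping those pages cached stays correct.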


Cheers,
Kyle Moffett

--
I lost interest in "blade servers" when I found they didn't throw
knives at people who weren't supposed to be in your machine room.
-- Anthony de Boer


