Subject: Re: large files unnecessary trashing filesystem cache?
On Wednesday 19 October 2005 00:37, Andrew Morton wrote:
> Andrew James Wade <andrew.j.wade@gmail.com> wrote:
> >
> > Sometimes you want a single file to take up most of the memory; databases
> > spring to mind. Perhaps files/processes that take up a large proportion of
> > memory should be penalized by preferentially reclaiming their pages, but
> > limit the aggressiveness so that they can still take up most of the memory
> > if sufficiently persistent (and the rest of the system isn't thrashing).
>
> Yes. Basically any smart heuristic we apply here will have failure modes.
> For example, the person whose application does repeated linear reads of the
> first 100MB of a 4G file will get very upset.

As will any dumb heuristic, for that matter; we'd need precognition[1] to
avoid them all. But we can hopefully make the failure modes rarer and more
predictable. I don't know how my proposal would fare, and since I don't have
the code to test it, I think I shall drop it.

[1] Which could, on occasion, be provided by hinting.
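
As a concrete example of such hinting: posix_fadvise(2) already lets an
application declare its access pattern. Below is a minimal, untested sketch
of a single linear pass over a big file that hands its pages back to the
kernel as it goes, so the scan doesn't push everything else out of the page
cache. The file name and chunk size are made up for illustration.

/* Sketch only: stream through a large file once, hinting the kernel so
 * the scan does not evict the rest of the page cache. */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define CHUNK (8 * 1024 * 1024)	/* arbitrary chunk size for illustration */

int main(void)
{
	static char buf[CHUNK];
	off_t done = 0;
	ssize_t n;
	int fd = open("/path/to/huge-file", O_RDONLY);	/* hypothetical path */

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* One linear pass is intended; let readahead be aggressive. */
	posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

	while ((n = read(fd, buf, sizeof(buf))) > 0) {
		/* ... consume buf ... */

		/* Tell the kernel we are done with the pages just read,
		 * so they are reclaimed before anything else is. */
		posix_fadvise(fd, done, n, POSIX_FADV_DONTNEED);
		done += n;
	}

	close(fd);
	return 0;
}

Of course this only helps the applications that bother to hint; the thread
above is about what the kernel should do when they don't.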
