Subject: Re: large files unnecessary trashing filesystem cache?
Date: Wed, 19 Oct 2005 01:45:45 -0400
From: Andrew James Wade <>
On Wednesday 19 October 2005 00:37, Andrew Morton wrote:
> Andrew James Wade <andrew.j.wade@gmail.com> wrote:
> >
> > Sometimes you want a single file to take up most of the memory; databases
> > spring to mind. Perhaps files/processes that take up a large proportion of
> > memory should be penalized by preferentially reclaiming their pages, but
> > limit the aggressiveness so that they can still take up most of the memory
> > if sufficiently persistent (and the rest of the system isn't thrashing).
>
> Yes.  Basically any smart heuristic we apply here will have failure modes.
> For example, the person whose application does repeated linear reads of the
> first 100MB of a 4G file will get very upset.
As will any dumb heuristic for that matter; we'd need precognition[1] to avoid all of them. But we can hopefully make the failure modes rarer and more predictable. I don't know how my proposal would fare, and as I do not have the code to test the matter I think I shall drop it.
[1] Which could, on occasion, be provided by hinting.
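Hinting along these lines is already available to user space in a limited
form via posix_fadvise(), which lets an application describe its access
pattern to the kernel. A minimal sketch follows; the file name and the 100MB
window are invented for illustration, and the kernel is free to treat some
of the advice (NOREUSE in particular) as a no-op:

#define _XOPEN_SOURCE 600	/* for posix_fadvise() */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	int err;
	int fd = open("bigfile.dat", O_RDONLY);	/* hypothetical file */

	if (fd < 0) {
		perror("open");
		return EXIT_FAILURE;
	}

	/*
	 * An application that repeatedly re-reads the first 100MB of a
	 * 4GB file can tell the kernel those pages are worth keeping
	 * (and reading ahead):
	 */
	err = posix_fadvise(fd, 0, (off_t)100 * 1024 * 1024,
			    POSIX_FADV_WILLNEED);
	if (err)
		fprintf(stderr, "fadvise(WILLNEED): %s\n", strerror(err));

	/*
	 * Conversely, a one-pass streaming reader can declare that it
	 * will not reuse the data, so the cache need not hold onto it:
	 *
	 *	posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
	 *	... read the file ...
	 *	posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
	 */

	close(fd);
	return EXIT_SUCCESS;
}

Whether and how aggressively the VM honours the advice is still up to the
kernel's heuristics; the hint only shifts the default.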