Subject: Re: large files unnecessary trashing filesystem cache?
On Tuesday 18 October 2005 16:01, Guido Fiala wrote:
> (please note, i'am not subscribed to the list, please CC me on reply)
>
> Story:
> Once in a while we have a discussion on the vdr (video disk recorder) mailing
> list about very large files trashing the filesystem's memory cache, leading to
> unnecessary delays when accessing directory contents that are no longer cached.
>
> With this program - and certainly with all applications that read very large
> files (much larger than memory) only once - all other cached blocks of the
> filesystem end up being evicted from memory solely to keep as much of that
> file as possible in memory, which seems to be a bad strategy in most
> situations.

For this particular workload, a heuristic to detect streaming and drop pages
a few MB behind the currently accessed ones would probably work well.
I believe the second part is already in the kernel (activated by an fadvise
call), but the heuristic is lacking.
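
For reference, the per-application fix looks roughly like this from user
space (a rough sketch, not from this thread; the 1 MB chunk and 8 MB window
are arbitrary choices): read the file sequentially and use
posix_fadvise(POSIX_FADV_DONTNEED) to drop the pages already consumed, so
the streaming read stops evicting everything else.

/*
 * Rough user-space sketch (not from this thread): stream a large file
 * and drop the already-read pages with POSIX_FADV_DONTNEED so the read
 * does not push everything else out of the page cache.  Chunk and
 * window sizes are arbitrary.
 */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define CHUNK  (1 << 20)	/* read 1 MB at a time */
#define WINDOW (8 << 20)	/* keep at most ~8 MB cached behind us */

int main(int argc, char **argv)
{
	static char buf[CHUNK];
	off_t done = 0;
	ssize_t n;
	int fd;

	if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0) {
		perror("open");
		return 1;
	}
	while ((n = read(fd, buf, sizeof(buf))) > 0) {
		done += n;
		/* ... consume buf here ... */
		if (done > WINDOW)	/* drop pages > WINDOW behind us */
			posix_fadvise(fd, 0, done - WINDOW,
				      POSIX_FADV_DONTNEED);
	}
	close(fd);
	return 0;
}

The point of the thread, of course, is that nobody wants to patch every
application this way, hence the interest in a kernel-side heuristic.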

> Of course one could always implement fadvise calls in all applications, but I
> suggest discussing whether a (configurable) maximum in-memory cache on a
> per-file basis should be implemented in linux/mm, or wherever this belongs.
>
> My guess was that it has something to do with mm/readahead.c; a quick and
> dirty test limiting the result of the function max_sane_readahead() to 8 MB
> did not solve the issue, but I might have done something wrong.
>
> I've searched the archive but could not find a previous discussion - is this a
> new idea?

I'd do searches on thrashing control and swap tokens. The problem with
thrashing is similar: a process accessing large amounts of memory in a short
period of time blows away the caches. And the solution should be similar:
penalize the offending process by preferentially reclaiming its pages.

> It would be interesting to discuss if and when this proposed feature could
> lead to better performance or has any unwanted side effects.

Sometimes you want a single file to take up most of the memory; databases
spring to mind. Perhaps files/processes that take up a large proportion of
memory should be penalized by preferentially reclaiming their pages, but with
limited aggressiveness, so that they can still take up most of the memory if
they are sufficiently persistent (and the rest of the system isn't thrashing).
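
To make that concrete, here is a toy illustration (names and constants are
invented, nothing like actual kernel code): bias reclaim against an owner in
proportion to its share of the cache and the current memory pressure, but cap
the bias so a persistent heavy user such as a database can still end up with
most of memory on an otherwise idle box.

/*
 * Toy illustration only -- invented names and constants, not kernel
 * code.  Extra reclaim pressure grows with the owner's share of the
 * cache and with global memory pressure, but is capped so a persistent
 * heavy user can still win most of memory.
 */
#include <stdio.h>

/* share and pressure in [0.0, 1.0]; returns a reclaim weight >= 1.0 */
static double reclaim_bias(double share, double pressure)
{
	double bias = 4.0 * share * pressure;	/* penalize big consumers */

	if (bias > 2.0)
		bias = 2.0;			/* ...but only so far */
	return 1.0 + bias;
}

int main(void)
{
	printf("big file, idle box:   %.2f\n", reclaim_bias(0.9, 0.1));
	printf("big file, thrashing:  %.2f\n", reclaim_bias(0.9, 0.9));
	printf("small working set:    %.2f\n", reclaim_bias(0.05, 0.9));
	return 0;
}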

>
> Thanks for ideas on that issue.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
