Subject: Re: large files unnecessary trashing filesystem cache?
On Wednesday 19 October 2005 06:10, Lee Revell wrote:
> On Tue, 2005-10-18 at 22:01 +0200, Guido Fiala wrote:
> > Of course one could always implement f_advise-calls in all
> > applications
>
> Um, this seems like the obvious answer. The application doing the read
> KNOWS it's a streaming read, while the best the kernel can do is guess.
>
> You don't really make much of a case that fadvise can't do the job.
>

The kernel could do its best to optimize default performance; applications that
know their own optimal behaviour should advise the kernel themselves (fadvise),
and all other files stay under the default heuristic policy (an adaptable,
configurable one).
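
On the application side that is already possible today with posix_fadvise().
A minimal sketch of a streaming reader (the 1 MiB chunk size and the argv
handling are just example choices, not taken from any particular program):

#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define CHUNK (1 << 20)         /* read and drop in 1 MiB steps */

int main(int argc, char **argv)
{
        if (argc < 2) {
                fprintf(stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* tell the kernel up front: one pass over the file, front to back */
        posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

        char *buf = malloc(CHUNK);
        if (!buf) {
                close(fd);
                return 1;
        }

        off_t done = 0;
        ssize_t n;

        while ((n = read(fd, buf, CHUNK)) > 0) {
                /* ... consume buf ... */

                /* drop the pages we just used from the page cache */
                posix_fadvise(fd, done, n, POSIX_FADV_DONTNEED);
                done += n;
        }

        free(buf);
        close(fd);
        return 0;
}

As far as I know the SEQUENTIAL hint mainly makes readahead more aggressive;
it is the DONTNEED call afterwards that actually lets the already-consumed
pages go, so a long streaming read does not push everything else out of the
cache.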

The heuristic can be based on access statistics:

streaming/sequential access can be guessed from an exactly 100% cache hit rate,
i.e. every page brought in by readahead is used exactly once (drop pages behind
the read position immediately);

random access/repeated reads can be guessed from a hit rate above 100%, i.e.
pages are read more than once (keep as much in memory as possible).

A hit rate below 100% is already handled sanely, I guess, by reducing
readahead; a more precognitive heuristic would gather access patterns (every
n-th block is read, so read ahead every n-th block: an unlikely scenario, I
guess, but it might happen with databases).
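
Very rough user-space sketch of that classification (the struct, field names
and the policy split are invented here for illustration, this is not existing
kernel code; in-kernel it would have to sit next to the per-file page cache
accounting):

#include <stdio.h>

enum cache_policy {
        POLICY_DEFAULT,         /* normal readahead and eviction */
        POLICY_DROP_BEHIND,     /* sequential stream: free pages once used */
        POLICY_KEEP,            /* re-read data: keep it resident */
};

struct file_stats {
        unsigned long pages_read;       /* pages handed to the reader */
        unsigned long first_reads;      /* pages read from disk exactly once */
        unsigned long repeat_hits;      /* reads satisfied by an already-seen page */
};

static enum cache_policy classify(const struct file_stats *s)
{
        if (s->pages_read == 0)
                return POLICY_DEFAULT;

        /* every page was used exactly once, in order: looks like a
         * streaming/sequential consumer, drop pages behind the reader */
        if (s->repeat_hits == 0 && s->first_reads == s->pages_read)
                return POLICY_DROP_BEHIND;

        /* pages are being read more than once ("more than 100% hit rate"
         * in the terms above): keep as much resident as memory allows */
        if (s->repeat_hits > 0)
                return POLICY_KEEP;

        return POLICY_DEFAULT;
}

int main(void)
{
        struct file_stats stream = { .pages_read = 1000, .first_reads = 1000 };
        struct file_stats reread = { .pages_read = 1000, .first_reads = 400,
                                     .repeat_hits = 600 };

        printf("stream -> %d, reread -> %d\n",
               classify(&stream), classify(&reread));
        return 0;
}

The streaming case classifies as drop-behind and the re-read case as
keep-resident; the real work would be maintaining those counters cheaply per
file.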

What about files read backwards? Other patterns?