    Subject: Re: large files unnecessary trashing filesystem cache?
    On Oct 19, 2005, at 13:58:37, Guido Fiala wrote:
    > The kernel could do its best to optimize default performance:
    > applications that know their own optimal behaviour should request
    > it themselves, and all other files are kept under a default
    > heuristic policy (an adaptable, configurable one).
    > The heuristic can be based on access statistics:
    > streaming/sequential access can be guessed from an exactly 100%
    > cache hit rate (drop the pages behind the reader immediately),
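
    The "applications that know their own behaviour should request it
    themselves" part already has a user-space route in posix_fadvise(2).
    A minimal sketch of a streaming reader that drops its own pages
    behind itself (the file name and buffer size are made up for
    illustration):

    #define _XOPEN_SOURCE 600
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[1 << 16];
        off_t done = 0;
        ssize_t n;
        int fd;

        /* Hypothetical large file that is read once, front to back. */
        fd = open("/tmp/big-stream.dat", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Hint up front that access will be sequential. */
        posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

        while ((n = read(fd, buf, sizeof(buf))) > 0) {
            /* ... consume buf ... */
            done += n;
            /* These pages will not be reread; ask the kernel to drop
             * them rather than let them crowd out other cached data. */
            posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
        }

        close(fd);
        return 0;
    }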

    What about a grep through my kernel sources or any other linear
    search through a large directory tree? That would get exactly 100%
    cache hit rate, which would cause your method to drop the pages
    immediately, meaning that subsequent greps are equally slow. I have
    enough memory to hold a couple of kernel trees, and I want my
    grepping to push other data out of RAM for a bit while I do my
    kernel development.
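
    To make the objection concrete, here is a rough user-space model of
    the proposed rule (the struct, the numbers, and the names are
    invented for illustration; this is not kernel code). A streaming
    player and a repeat grep over an already-cached tree present
    identical statistics, so the rule cannot tell them apart:

    #include <stdio.h>

    /* Invented model of the proposed heuristic: drop the pages behind a
     * reader when access is sequential and the cache hit rate is
     * exactly 100%. */
    struct access_stats {
        unsigned long accesses;
        unsigned long cache_hits;
        int sequential;
    };

    static int drop_behind(const struct access_stats *st)
    {
        return st->sequential &&
               st->accesses > 0 &&
               st->cache_hits == st->accesses;
    }

    int main(void)
    {
        /* A media player streaming a file it will never reread. */
        struct access_stats player = { 4096, 4096, 1 };
        /* A second grep -r over a kernel tree that is already cached. */
        struct access_stats grep_run = { 4096, 4096, 1 };

        printf("player:   drop behind = %d\n", drop_behind(&player));
        printf("grep run: drop behind = %d\n", drop_behind(&grep_run));
        /* Both print 1: the heuristic cannot distinguish the workload
         * whose pages should go from the one whose pages should stay. */
        return 0;
    }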

    Kyle Moffett

    I lost interest in "blade servers" when I found they didn't throw
    knives at people who weren't supposed to be in your machine room.
    -- Anthony de Boer

