[lkml]   [2005]   [Oct]   [18]   [last100]   RSS Feed
Views: [wrap][no wrap]   [headers]  [forward] 
Messages in this thread
    SubjectRe: large files unnecessary trashing filesystem cache?
    On Tue, 2005-10-18 at 23:58 +0200, Bodo Eggert wrote:
    > Badari Pulavarty <> wrote:
    > > On Tue, 2005-10-18 at 22:01 +0200, Guido Fiala wrote:
    > [large files trash cache]
    > > Is there a reason why those applications couldn't use O_DIRECT ?
    > The cache trashing will affect all programs handling large files:
    > mkisofs * > iso
    > dd < /dev/hdx42 | gzip > imagefile
    > perl -pe's/filenamea/filenameb/' < iso | cdrecord - # <- never tried

    Are these examples which demonstrate the thrashing problem.
    Few product (database) groups here are trying to get me to
    work on a solution before demonstrating the problem. They
    also claim exactly what you are saying. They want a control
    on how many pages (per process or per file or per filesystem
    or system wide) you can have in filesystem cache.

    Thats why I am pressing to find out the real issue behind this.
    If you have a demonstratable testcase, please let me know.
    I will be happy to take a look.

    > Changing a few programs will only partly cover the problems.
    > I guess the solution would be using random cache eviction rather than
    > a FIFO. I never took a look the cache mechanism, so I may very well be
    > wrong here.

    Read-only pages should be re-cycled really easily & quickly. I can't
    belive read-only pages are causing you all the trouble.


    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to
    More majordomo info at
    Please read the FAQ at

     \ /
      Last update: 2005-10-19 01:13    [W:0.028 / U:105.540 seconds]
    ©2003-2016 Jasper Spaans. hosted at Digital OceanAdvertise on this site