Subject: Re: large files unnecessary trashing filesystem cache?
From: Badari Pulavarty <>
Date: Tue, 18 Oct 2005 16:05:53 -0700
On Tue, 2005-10-18 at 23:58 +0200, Bodo Eggert wrote:
> Badari Pulavarty <pbadari@gmail.com> wrote:
> > On Tue, 2005-10-18 at 22:01 +0200, Guido Fiala wrote:
> > [large files trash cache]
>
> > Is there a reason why those applications couldn't use O_DIRECT ?
>
> The cache trashing will affect all programs handling large files:
>
> mkisofs * > iso
> dd < /dev/hdx42 | gzip > imagefile
> perl -pe's/filenamea/filenameb/' < iso | cdrecord -   # <- never tried
Are these examples that actually demonstrate the thrashing problem? A few product (database) groups here are trying to get me to work on a solution before demonstrating the problem. They also claim exactly what you are saying: they want a control on how many pages (per process, per file, per filesystem, or system-wide) can be kept in the filesystem cache.
That's why I am pressing to find out the real issue behind this. If you have a demonstrable test case, please let me know; I will be happy to take a look.
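(To be concrete about what I would call a demonstrable test case: something along the lines of the sketch below. It is untested and the /tmp/bigfile path is only a placeholder, but the idea is to stream through a multi-GB file with plain read(2) and print the "Cached:" line from /proc/meminfo before and after, so we can see how much page cache a single sequential reader consumes and whether it stays there while other workloads suffer.)

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

/* print the "Cached:" line from /proc/meminfo, prefixed with a tag */
static void print_cached(const char *tag)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[128];

	if (!f)
		return;
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "Cached:", 7))
			printf("%s %s", tag, line);
	fclose(f);
}

int main(int argc, char **argv)
{
	/* placeholder default - point it at any multi-GB file */
	const char *path = argc > 1 ? argv[1] : "/tmp/bigfile";
	static char buf[1 << 16];
	ssize_t n;
	int fd;

	print_cached("before:");
	fd = open(path, O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		;	/* discard the data, only the cache behaviour matters */
	close(fd);
	print_cached("after: ");
	return 0;
}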
> Changing a few programs will only partly cover the problems.
>
> I guess the solution would be using random cache eviction rather than
> a FIFO. I never took a look the cache mechanism, so I may very well be
> wrong here.
Read-only pages should be recycled really easily and quickly. I can't believe read-only pages are causing you all this trouble.
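(One easy way to check that, if you want to, is a mincore() walk over the file. The sketch below is untested and the path is again just a placeholder; it maps the file read-only and counts how many of its pages are currently resident, so running it before and after putting the box under memory pressure would show whether those read-only pages really do get recycled.)

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/tmp/bigfile"; /* placeholder */
	long pagesize = sysconf(_SC_PAGESIZE);
	size_t pages, resident = 0, i;
	unsigned char *vec;
	struct stat st;
	void *map;
	int fd;

	fd = open(path, O_RDONLY);
	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(path);
		return 1;
	}
	/* one residency byte per page of the file */
	pages = (st.st_size + pagesize - 1) / pagesize;
	map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	vec = malloc(pages);
	if (map == MAP_FAILED || !vec || mincore(map, st.st_size, vec) < 0) {
		perror("mmap/mincore");
		return 1;
	}
	for (i = 0; i < pages; i++)
		if (vec[i] & 1)
			resident++;
	printf("%lu of %lu pages of %s resident in cache\n",
	       (unsigned long)resident, (unsigned long)pages, path);
	munmap(map, st.st_size);
	free(vec);
	close(fd);
	return 0;
}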
Thanks,
Badari