Date: Wed, 19 Oct 2005 13:10:32 +0200
From: gfiala@s ...
Subject: Re: large files unnecessary trashing filesystem cache?
Quoting Andrew Morton <akpm@osdl.org>:
> An obvious approach would be an LD_PRELOAD thingy which modifies read() and
> write(), perhaps controlled via an environment variable. AFAIK nobody has
> even attempted this.
Sounds interesting.
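To make the idea concrete, here is a minimal sketch of such a preload shim, assuming glibc's RTLD_NEXT and Linux's posix_fadvise(); only read() is shown, write() would be analogous, and nothing here is an existing tool:

/* Drop-behind via LD_PRELOAD: interpose read() and ask the kernel to
 * evict the pages just consumed. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <unistd.h>

static ssize_t (*real_read)(int, void *, size_t);

ssize_t read(int fd, void *buf, size_t count)
{
	off_t pos;
	ssize_t n;

	if (!real_read)
		real_read = (ssize_t (*)(int, void *, size_t))
			dlsym(RTLD_NEXT, "read");

	pos = lseek(fd, 0, SEEK_CUR);	/* where this read starts */
	n = real_read(fd, buf, count);

	/* Drop the range we just went through; if lseek() failed
	 * (pipes, sockets) we simply skip the hint. */
	if (n > 0 && pos != (off_t)-1)
		posix_fadvise(fd, pos, n, POSIX_FADV_DONTNEED);

	return n;
}

Built with something like "gcc -shared -fPIC -o dropbehind.so dropbehind.c -ldl" and activated via LD_PRELOAD=./dropbehind.so, it needs no application changes, which is the appeal.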
> A decent kernel implementation would be to add a max_resident_pages to
> struct file_struct and to use that to perform drop-behind within read() and
> write(). That's a bit of arithmetic and a call to
> invalidate_mapping_pages(). The userspace interface to that could be a
> linux-specific extension to posix_fadvise() or to fcntl().
I would still like a way to configure a "default file policy/heuristics" for the whole system, just like I can choose the I/O scheduler.
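For comparison, what an already-modified application can do today with plain posix_fadvise() looks roughly like the sketch below (stream_file() is just an illustrative name); the pain point is exactly that every program needs this bookkeeping added by hand:

/* Stream a file without letting it accumulate in the pagecache. */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <unistd.h>

int stream_file(const char *path)
{
	char buf[64 * 1024];
	off_t done = 0;
	ssize_t n;
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return -1;

	/* Declare one-shot sequential access up front. */
	posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

	while ((n = read(fd, buf, sizeof(buf))) > 0) {
		/* ... consume buf ... */
		done += n;
		/* Drop everything consumed so far. */
		posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
	}

	close(fd);
	return n == 0 ? 0 : -1;
}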
> But that still requires that all the applications be modified.
>
> So I'd also suggest a new resource limit which, if set, is copied into the
> application's file_structs on open(). So you then write a little wrapper
> app which does setrlimit()+exec():
>
> limit-cache-usage -s 1000 my-fave-backup-program <args>
>
> Which will cause every file which my-fave-backup-program reads or writes to
> be limited to a maximum pagecache residency of 1000 kbytes.
Or make it another 'ulimit' parameter...
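Either way the userspace side stays tiny. A sketch of the quoted limit-cache-usage wrapper, with RLIMIT_PAGECACHE as a purely hypothetical constant (no such resource limit exists, so on a stock kernel the setrlimit() call below fails with EINVAL):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>
#include <unistd.h>

#define RLIMIT_PAGECACHE 16	/* made-up value for illustration */

int main(int argc, char *argv[])
{
	struct rlimit rl;

	if (argc < 4 || strcmp(argv[1], "-s") != 0) {
		fprintf(stderr, "usage: %s -s <kbytes> <prog> [args...]\n",
			argv[0]);
		return 1;
	}

	/* Cap pagecache residency for every file the child opens;
	 * the limit is inherited across exec(). */
	rl.rlim_cur = rl.rlim_max = (rlim_t)atol(argv[2]) * 1024;
	if (setrlimit(RLIMIT_PAGECACHE, &rl) < 0) {
		perror("setrlimit");
		return 1;
	}

	execvp(argv[3], &argv[3]);
	perror("execvp");
	return 1;
}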