From: Ingo Oeser <>
Subject: Re: large files unnecessary trashing filesystem cache?
Date: Wed, 19 Oct 2005 17:54:11 +0200
Hi,
On Wednesday 19 October 2005 13:10, gfiala@s.netic.de wrote:
> Quoting Andrew Morton <akpm@osdl.org>:
> > So I'd also suggest a new resource limit which, if set, is copied into the
> > application's file_structs on open(). So you then write a little wrapper
> > app which does setrlimit()+exec():
> >
> >     limit-cache-usage -s 1000 my-fave-backup-program <args>
> >
> > Which will cause every file which my-fave-backup-program reads or writes to
> > be limited to a maximum pagecache residency of 1000 kbytes.
>
> Or make it another 'ulimit' parameter...
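To make the proposal concrete, here is a minimal sketch of such a
wrapper, assuming a patched kernel that actually implements the new
resource limit. RLIMIT_PAGECACHE and its slot number are invented for
illustration; no such limit exists in mainline, so on a stock kernel
the setrlimit() call below would simply fail with EINVAL:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/resource.h>

#ifndef RLIMIT_PAGECACHE
#define RLIMIT_PAGECACHE 16     /* assumption: whatever slot a patch assigns */
#endif

int main(int argc, char *argv[])
{
        struct rlimit rl;
        long kbytes;

        if (argc < 4 || strcmp(argv[1], "-s") != 0) {
                fprintf(stderr, "usage: %s -s <kbytes> <program> [args...]\n",
                        argv[0]);
                return 1;
        }

        kbytes = atol(argv[2]);
        rl.rlim_cur = rl.rlim_max = (rlim_t)kbytes * 1024;

        /* Per the proposal, the limit survives exec() and is copied
         * into each file_struct on open(). */
        if (setrlimit(RLIMIT_PAGECACHE, &rl) < 0) {
                perror("setrlimit");
                return 1;
        }

        execvp(argv[3], &argv[3]);
        perror("execvp");
        return 1;
}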
Which is already there: there is a ulimit for "maximum RSS", which is at least a superset of "maximum pagecache residency".
This is already settable and known to many admins. But AFAIR it is not completely honoured by the kernel, right?
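For reference, a minimal sketch of how the existing RSS limit is
driven from userspace. Setting it succeeds, but, matching the
observation above, 2.6 kernels accept RLIMIT_RSS without actually
enforcing it:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
        struct rlimit rl;

        if (getrlimit(RLIMIT_RSS, &rl) < 0) {
                perror("getrlimit");
                return 1;
        }
        printf("RLIMIT_RSS: cur=%llu max=%llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);

        /* Lowering the soft limit succeeds, but the VM ignores it. */
        rl.rlim_cur = 1024 * 1024;      /* 1 MB */
        if (setrlimit(RLIMIT_RSS, &rl) < 0)
                perror("setrlimit");
        return 0;
}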
But a per-file limit is a much better choice, since it would allow concurrent streaming, which is needed at least to implement timeshifting[1].
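As a sketch of what per-file cache control looks like with today's
interfaces: a streaming writer can bound its own file's pagecache
footprint with the purely advisory posix_fadvise(POSIX_FADV_DONTNEED).
File name, buffer and interval sizes below are invented, and being
cooperative rather than enforced, this is not the per-file limit
argued for here:

#define _XOPEN_SOURCE 600       /* for posix_fadvise() */
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        static char buf[64 * 1024];     /* dummy stream data */
        off_t done = 0;
        int fd = open("timeshift.buf", O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (fd < 0) {
                perror("open");
                return 1;
        }

        for (int i = 0; i < 256; i++) { /* stand-in for the capture loop */
                if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
                        perror("write");
                        break;
                }
                done += sizeof(buf);

                /* Every 4 MB, flush and drop what we just wrote so
                 * this stream stops competing for the pagecache. */
                if (done % (4 * 1024 * 1024) == 0) {
                        fdatasync(fd);
                        posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
                }
        }
        close(fd);
        return 0;
}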
So either I'm missing something, or this is not a proper solution yet.
Regards
Ingo Oeser
[1] Which is obviously implemented via some kind of on-disk FIFO.