    Subject: Re: large files unnecessary trashing filesystem cache?
    Ingo Oeser <> wrote:
    > Hi,
    > On Wednesday 19 October 2005 13:10, wrote:
    > > Quoting Andrew Morton <>:

    Please don't edit Cc lines. Just do reply-to-all.

    > > > So I'd also suggest a new resource limit which, if set, is copied into the
    > > > application's file_structs on open(). So you then write a little wrapper
    > > > app which does setrlimit()+exec():
    > > >
    > > > limit-cache-usage -s 1000 my-fave-backup-program <args>
    > > >
    > > > Which will cause every file which my-fave-backup-program reads or writes to
    > > > be limited to a maximum pagecache residency of 1000 kbytes.
    > >
    > > Or make it another 'ulimit' parameter...

    That's what I said. ulimit is the shell interface to resource limits.
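
    Roughly, the wrapper could look like this (a sketch only: RLIMIT_PAGECACHE
    does not exist in mainline, so that constant is hypothetical and stands in
    for the proposed per-open-file limit):

        /* limit-cache-usage: sketch of the setrlimit()+exec() wrapper.
         * RLIMIT_PAGECACHE is hypothetical; a current kernel would reject
         * it with EINVAL until the proposed limit is implemented.
         */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/resource.h>
        #include <unistd.h>

        #define RLIMIT_PAGECACHE 16    /* hypothetical resource number */

        int main(int argc, char **argv)
        {
            struct rlimit rl;

            if (argc < 4 || strcmp(argv[1], "-s") != 0) {
                fprintf(stderr, "usage: %s -s <kbytes> <prog> [args...]\n",
                        argv[0]);
                return 1;
            }
            rl.rlim_cur = rl.rlim_max = atol(argv[2]) * 1024;

            /* The kernel would copy this into each struct file at open() */
            if (setrlimit(RLIMIT_PAGECACHE, &rl) < 0) {
                perror("setrlimit");
                return 1;
            }
            execvp(argv[3], &argv[3]);
            perror("execvp");
            return 1;
        }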

    > Which is already there: there is a ulimit for "maximum RSS",
    > which is at least a superset of "maximum pagecache residency".

    RSS is a quite separate concept from pagecache.
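
    For contrast, here is what the existing knob looks like (a sketch; the
    calls themselves are real, but note that 2.6 kernels accept RLIMIT_RSS
    without enforcing it, and that it governs the process's own resident
    pages, not the pagecache populated by its file I/O):

        /* Sketch: setting the existing RSS limit. The call succeeds, but
         * it does not bound per-file pagecache residency.
         */
        #include <stdio.h>
        #include <sys/resource.h>

        int main(void)
        {
            struct rlimit rl;

            rl.rlim_cur = rl.rlim_max = 100 * 1024 * 1024;    /* 100 MB */
            if (setrlimit(RLIMIT_RSS, &rl) < 0)
                perror("setrlimit(RLIMIT_RSS)");

            if (getrlimit(RLIMIT_RSS, &rl) == 0)
                printf("RSS limit now %llu bytes\n",
                       (unsigned long long)rl.rlim_cur);
            return 0;
        }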

    > This is already settable and known to many admins. But AFAIR it is not
    > completely honoured by the kernel, right?
    > But a per-file limit is a much better choice, since it would allow
    > concurrent streaming. This is needed to implement timeshifting at least[1].
    > So either I'm missing something, or this is not a proper solution yet.

    I described a couple of ways in which this can be done from userspace with
    existing kernel interfaces.
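
    One such approach (a sketch; posix_fadvise() is real and available in 2.6,
    but the dropping policy below is illustrative, not necessarily the exact
    scheme referred to above) is to stream through the file and periodically
    tell the kernel that the already-consumed pages won't be needed again, so
    the reader never holds more than a bounded window in pagecache:

        /* Sketch: bound a sequential reader's pagecache footprint with
         * posix_fadvise(POSIX_FADV_DONTNEED). Pages behind the read
         * pointer are handed back to the kernel every CHUNK bytes.
         */
        #define _XOPEN_SOURCE 600
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        #define CHUNK (8 * 1024 * 1024)    /* drop granularity: 8 MB */

        int main(int argc, char **argv)
        {
            char buf[64 * 1024];
            off_t done = 0, dropped = 0;
            ssize_t n;
            int fd;

            if (argc != 2)
                return 1;
            fd = open(argv[1], O_RDONLY);
            if (fd < 0) {
                perror("open");
                return 1;
            }
            while ((n = read(fd, buf, sizeof(buf))) > 0) {
                done += n;
                if (done - dropped >= CHUNK) {
                    /* Evict the pages we have already consumed */
                    posix_fadvise(fd, dropped, done - dropped,
                                  POSIX_FADV_DONTNEED);
                    dropped = done;
                }
            }
            close(fd);
            return 0;
        }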
