Subject: Re: [RFC][PATCH] inotify 0.9
    On Thu, 16 Sep 2004, Jan Kara wrote:

    > > John McCutchan wrote:
    > > >Hello,
    > > >
    > > >I am releasing a new version of inotify. Attached is a patch for
    > > >2.6.8.1.
    > <snip>
    >
    > > >--MEMORY USAGE--
    > > >
    > > >The inotify data structures are light weight:
    > > >
    > > >inotify watch is 40 bytes
    > > >inotify device is 68 bytes
    > > >inotify event is 272 bytes
    > > >
    > > >So assuming a device has 8192 watches, the structures are only going
    > > >to consume 320KB of memory. With a maximum of 8 devices allowed
    > > >to exist at a time, this is still only 2.5 MB.
    > > >
    > > >Each device can also have 256 events queued at a time, which comes to
    > > >68KB per device, and only 0.5 MB if all devices are open and have
    > > >a full event queue.
    > > >
    > > >So approximately 3 MB of memory are used in the rare case of
    > > >everything open and full.
    > > >
    > > >Each inotify watch pins the inode of a directory/file in memory.
    > > >The size of an inode differs per file system, but let's assume
    > > >that it is 512 bytes.
    > > >
    > > >So assuming the maximum number of global watches is active, this would
    > > >pin down 32 MB of inodes in the inode cache. Again, not a problem
    > > >on a modern system.
    > >
    > > Did you work for Microsoft? Bloat doesn't count? And is this going to be
    > > low memory you pin? And is every file create or delete (or update of
    > > atime) going to blast this mess through cache looking for people to notify?
    > > >
    > > >On smaller systems, the maximum watches / events could be lowered
    > > >to provide a smaller foot print.
    > >
    > > Let's rethink this and say the max is zero by default, and by use of
    > > proc or sys or whatever's in vogue today you can enable the feature by
    > > setting a non-zero value.
    > As I understand the patch, it won't have any nontrivial memory
    > footprint if you don't use inotify. Only when someone wants to
    > watch an inode is the appropriate structure allocated, the inode
    > pinned, etc. The numbers above are for the case where you watch the
    > maximum possible number of inodes.
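
    For reference, the worst-case arithmetic quoted above works out as
    follows. This is just a userspace sanity check of the quoted numbers;
    the per-structure sizes and the 512-byte inode are the figures assumed
    in the original mail, not measured values:

	/* Check the worst case: 8 devices, 8192 watches and 256 queued
	 * events per device, 512 bytes per pinned inode (assumed). */
	#include <stdio.h>

	int main(void)
	{
		unsigned long watch = 40, event = 272, inode = 512;
		unsigned long devs = 8, watches = 8192, events = 256;

		printf("watches, one dev:  %lu KB\n", watch * watches / 1024);
		printf("watches, all devs: %lu KB\n", watch * watches * devs / 1024);
		printf("events, one dev:   %lu KB\n", event * events / 1024);
		printf("events, all devs:  %lu KB\n", event * events * devs / 1024);
		printf("pinned inodes:     %lu MB\n",
		       inode * watches * devs / (1024 * 1024));
		return 0;
	}

    That prints 320 KB, 2560 KB, 68 KB, 544 KB and 32 MB: roughly 3 MB of
    inotify structures, dwarfed by the 32 MB of pinned inodes.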

    The point I was making is that this doesn't scale well, because it eats
    resources which may be unavailable on many systems, and which others are
    trying to conserve. Since that may limit where inotify can be used, it
    limits its usefulness.
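
    One way to realize the proc knob suggested in the quote above would be
    an ordinary sysctl. The sketch below is illustrative only and not taken
    from the patch; the variable name, default and table are hypothetical,
    written against the 2.6-era sysctl interface:

	#include <linux/sysctl.h>

	/* Hypothetical limit: zero (feature disabled) until an admin
	 * raises it via /proc/sys, as suggested above. */
	static int inotify_max_watches;

	static ctl_table inotify_table[] = {
		{
			.ctl_name	= 1,
			.procname	= "inotify_max_watches",
			.data		= &inotify_max_watches,
			.maxlen		= sizeof(int),
			.mode		= 0644,
			.proc_handler	= &proc_dointvec,
		},
		{ .ctl_name = 0 }
	};

    The table would be hooked in with register_sysctl_table(), and the
    watch-allocation path would fail once the count reached the limit.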

    > Maybe you should not be so fast in using your flamethrower;)

    I didn't intend this as a flame, but I do feel this implementation doesn't
    scale. I offered another approach off the top of my head which appears to
    me to be more scalable; I claimed no expertise, I just made a suggestion
    based on my first thought on how I would attack the problem.

    If we are going to 4k stacks because larger contiguous memory blocks are
    hard to find, I have to suspect that anything which ties up blocks of
    memory measured in MB is going to cause problems. I didn't even ask what
    would happen on NUMA machines, because that's not my usual concern.

    I'm still horrified by the memory requirements :-(

    > --
    > Jan Kara <jack@suse.cz>
    > SuSE CR Labs
    >

    Thanks for taking the time to note that my tone may have been harsh even
    if my point was valid.


    --
    bill davidsen <davidsen@tmr.com>
    CTO, TMR Associates, Inc
    Doing interesting things with little computers since 1979.

