    Subject: Re: [PATCH 01/11] IMA: use rbtree instead of radix tree for inode information cache
    On Tue, 2010-10-26 at 10:22 +1100, Dave Chinner wrote:
    > On Mon, Oct 25, 2010 at 02:41:18PM -0400, Eric Paris wrote:
    > > The IMA code needs to store the number of tasks which have an open fd
    > > granting permission to write a file even when IMA is not in use. It needs
    > > this information in order to be enabled at a later point in time without
    > > losing its integrity guarantees. At the moment that means we store a
    > > little bit of data about every inode in a cache. We use a radix tree keyed
    > > on the inode's memory address. Dave Chinner pointed out that a radix tree
    > > is a terrible data structure for such a sparse key space. This patch
    > > switches to using an rbtree which should be more efficient.
    > I'm not sure this is the right fix, though.
    > Realistically, there is a 1:1 relationship between the inode and the
    > IMA information. I fail to see why an external index is needed here
    > at all - just use a separate structure to store the IMA information
    > that the inode points to. That makes the need for a new global index
    > and global lock go away completely.
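
    For anyone following along, the change described in the commit message
    above boils down to replacing the radix tree with an rbtree keyed on the
    inode's address. A minimal sketch of that idiom (not the actual patch;
    ima_iint_cache and ima_iint_lock are placeholder names):

    #include <linux/rbtree.h>
    #include <linux/spinlock.h>
    #include <linux/fs.h>

    struct ima_iint_cache {
            struct rb_node  rb_node;        /* node in the global tree */
            struct inode    *inode;         /* the key: the inode's address */
            /* ... integrity state ... */
    };

    static struct rb_root ima_iint_tree = RB_ROOT;
    static DEFINE_SPINLOCK(ima_iint_lock);

    /* Look up the entry for an inode; caller holds ima_iint_lock. */
    static struct ima_iint_cache *ima_iint_find(struct inode *inode)
    {
            struct rb_node *n = ima_iint_tree.rb_node;

            while (n) {
                    struct ima_iint_cache *iint =
                            rb_entry(n, struct ima_iint_cache, rb_node);

                    if (inode < iint->inode)
                            n = n->rb_left;
                    else if (inode > iint->inode)
                            n = n->rb_right;
                    else
                            return iint;
            }
            return NULL;
    }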

    I guess I did a bad job explaining my 1:1 relationship comments. I only
    need the i_readcount in a 1:1 manner (I'm also using the already
    existing i_writecount). So IMA needs some information in a 1:1
    relationship, but everything else in the IMA structure is only needed
    when 'a measurement policy is loaded.'
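
    Very roughly, the split I mean looks like this (made-up names and fields,
    just to illustrate):

    #include <linux/types.h>
    #include <linux/atomic.h>
    #include <linux/mutex.h>

    /* The 1:1 piece: what every inode needs even while IMA is idle. */
    struct ima_inode_counts {
            atomic_t        i_readcount;    /* the one thing IMA must track */
            /* i_writecount already exists in struct inode today */
    };

    /* The rest: only needed once a measurement policy is loaded. */
    struct ima_iint_cache {
            u8              flags;
            u8              digest[20];     /* e.g. a measurement hash */
            struct mutex    mutex;
    };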

    I believe that IBM is going to look into making i_readcount a first
    class citizen which can be used by both IMA and generic_setlease().
    Then people could say IMA had 0 per inode overhead :)

    > You're already adding 8 bytes to the inode, so why not make it a
    > pointer.

    It's 4 bytes plus 4 bytes of padding today, but yes.

    > We've got 4 conditions:

    You're suggesting we go to 4 conditions? Today we have 3.

    > 1. not configured - no overhead
    > 2. configured, boot time disabled - 8 bytes per inode
    > 3. configured, boot time enabled, runtime disabled - 8 bytes per
    > inode + small IMA structure

    2 and 3 are the same today, and both are 4+4. I believe your suggestion
    for #3 would be 8 bytes in the inode pointing to a 4+4 byte structure. I
    don't really know if that gets us anything.
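
    Sketched out, the two layouts being compared are roughly (made-up names):

    /* Today (conditions 2 and 3): a counter embedded in the inode. */
    struct inode_fields_today {
            atomic_t        i_readcount;    /* 4 bytes + 4 bytes of padding */
    };

    /* Suggested #3: an 8-byte pointer to a separately allocated blob. */
    struct ima_counts {
            atomic_t        i_readcount;    /* the same 4 + 4 bytes, elsewhere */
    };

    struct inode_fields_suggested {
            struct ima_counts *i_ima;       /* 8 bytes, NULL until allocated */
    };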

    > 4. configured, boot time enabled, runtime enabled - 8 bytes per
    > inode + large IMA structure

    > Anyone who wants the option of runtime enablement can take the extra
    > allocation overhead, but otherwise nobody is affected apart from 8
    > bytes of additional memory per inode. I doubt that will change
    > anything unless it increases the size of the inode enough to push it
    > over slab boundaries. And if LSM stacking is introduced, then that 8
    > bytes per inode overhead will go away, anyway.

    At least it gets shifted so you don't see it. Can't say it goes away,
    though.

    > This approach doesn't introduce new global lock and lookup overhead
    > into the main VFS paths, allows you to remove a bunch of code and
    > has a path forward for removing the 8 byte per inode overhead as
    > well. Seems like the best compromise to me....

    At the end of my patch series there are no global locks in the main VFS
    paths (unless you load an IMA measurement policy). I realize that this
    patch switches an rcu_read_lock() to a spin_lock(), and maybe that's what
    you mean, but you'll find that I drop ALL locking on core paths when you
    don't load a measurement policy in 10/11.
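
    In other words, by 10/11 the hooks look roughly like this (a sketch with
    illustrative names, assuming an i_readcount field in the inode as
    discussed above; not the actual code):

    #include <linux/fs.h>
    #include <linux/spinlock.h>
    #include <linux/atomic.h>

    static bool ima_policy_loaded;          /* set once a policy is loaded */
    static DEFINE_SPINLOCK(ima_iint_lock);

    /* Called from the core VFS open path. */
    static void ima_counts_get(struct file *file)
    {
            struct inode *inode = file->f_mapping->host;

            /* Always kept, locklessly, so IMA can be enabled later. */
            if (file->f_mode & FMODE_READ)
                    atomic_inc(&inode->i_readcount);

            /* Only take the global lock and touch the iint cache when a
             * measurement policy has actually been loaded. */
            if (!ima_policy_loaded)
                    return;

            spin_lock(&ima_iint_lock);
            /* ... look up the inode's entry and do the measurement ... */
            spin_unlock(&ima_iint_lock);
    }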

