Subject: Re: [PATCH 11/18] fs: Introduce per-bucket inode hash locks
On Fri, Oct 08, 2010 at 02:54:09PM -0400, Christoph Hellwig wrote:
> > +struct inode_hash_bucket {
> > + struct hlist_bl_head head;
> > +};
> > +
> > +static inline void spin_lock_bucket(struct inode_hash_bucket *b)
> > +{
> > + bit_spin_lock(0, (unsigned long *)b);
> > +}
> > +
> > +static inline void spin_unlock_bucket(struct inode_hash_bucket *b)
> > +{
> > + __bit_spin_unlock(0, (unsigned long *)b);
> > +}
>
> I've looked at the dcache version of this again, and I really hate
> duplicating these helpers in the dcache code as well. IMHO they
> should simply operate directly on the hlist_bl_head, as that's
> what it was designed for. I also don't really see any point in
> wrapping the hlist_bl_head as inode_hash_bucket. If the bucket naming
> is important we could rename the hlist_bl stuff to bl_hash, and the
> hlist_bl_head could become bl_hash_bucket.

The wrapping was done because someone, like -rt, might want more than one
bit of memory to implement the lock. They would have to make a few other
changes, granted, but keeping the wrapper reduces a lot of that churn.
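
As a rough sketch (not part of this patch; the config symbol and the -rt
lock choice are only illustrative), an -rt tree could grow the bucket into
a full lock while only the wrapper and its two helpers change:

#include <linux/bit_spinlock.h>
#include <linux/list_bl.h>
#include <linux/spinlock.h>

struct inode_hash_bucket {
#ifdef CONFIG_PREEMPT_RT	/* illustrative symbol, not in this patch */
	spinlock_t lock;	/* a real lock, more than one bit of memory */
#endif
	struct hlist_bl_head head;
};

static inline void spin_lock_bucket(struct inode_hash_bucket *b)
{
#ifdef CONFIG_PREEMPT_RT
	spin_lock(&b->lock);
#else
	bit_spin_lock(0, (unsigned long *)&b->head);
#endif
}

static inline void spin_unlock_bucket(struct inode_hash_bucket *b)
{
#ifdef CONFIG_PREEMPT_RT
	spin_unlock(&b->lock);
#else
	__bit_spin_unlock(0, (unsigned long *)&b->head);
#endif
}

Callers never look inside the bucket, which is where the churn gets saved.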

I didn't see the point of a layer of dumb wrappers for hlist_bl_head
locking; they would just reproduce the bit spin and wait lock primitives
in wrappers when we already have good functions for them.
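
For what it's worth, those wrappers would amount to little more than the
following (a sketch only; the names here are made up, not an existing API
in this series):

static inline void hlist_bl_lock(struct hlist_bl_head *h)
{
	/* hypothetical helper: just a rename of the existing bit spinlock */
	bit_spin_lock(0, (unsigned long *)h);
}

static inline void hlist_bl_unlock(struct hlist_bl_head *h)
{
	__bit_spin_unlock(0, (unsigned long *)h);
}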


