Subject: Re: fs: Inode cache scalability V3
On Wed, Oct 13, 2010 at 07:36:48PM -0400, Christoph Hellwig wrote:
> On Wed, Oct 13, 2010 at 05:46:09PM -0400, Christoph Hellwig wrote:
> > On Wed, Oct 13, 2010 at 11:58:45AM -0400, Christoph Hellwig wrote:
> > >
> > > It's 100% reproducible on my kvm VM. The bug is the assert_spin_locked
> > > in redirty_tail. I really can't see how we reach it without holding
> > > d_lock, so this confuses me.
> >
> > We are for some reason getting a block device inode that is on the
> > dirty list of a bdi that it doesn't point to. Still trying to figure
> > out how exactly that happens.
>
> It's because __blkdev_put resets the bdi on the mapping, and bdev inodes
> are still special cased to not use s_bdi, unlike everybody else. So we
> keep switching between different bdis, which is how an inode ends up on
> the dirty list of a bdi its mapping no longer points to, tripping the
> locking assertion.
>
> I wonder what's a good workaround for that. Just flushing out all
> dirty state of a block device inode on last close would fix it, but we'd
> still have all the dragons hidden underneath until we finally sort
> out the bdi reference mess.
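
For reference, the reset in question is (roughly, from memory, not an
exact quote of fs/block_dev.c) this assignment on last close in
__blkdev_put(), which points the mapping back at the default bdi while
the inode may still be sitting dirty on the old bdi's writeback list:

	/* paraphrased last-close path in __blkdev_put() */
	bdev->bd_inode->i_data.backing_dev_info =
					&default_backing_dev_info;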

Perhaps for the moment make __blkdev_put() move the inode onto the
dirty lists for the default bdi when it switches the bdi in the
mapping? e.g. add an "inode_switch_bdi" helper that is only called in
this case?
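
Something like the below, perhaps - a completely untested sketch, where
"wb_list_lock" and "i_wb_list" are just stand-in names for whatever
lock protects and whatever list field threads the per-bdi dirty lists
in this series:

	/*
	 * Untested sketch only: repoint a bdev inode's mapping at a new
	 * bdi and move any dirty state over with it, so the inode never
	 * sits on a dirty list of a bdi it no longer points to.
	 */
	static void inode_switch_bdi(struct inode *inode,
				     struct backing_dev_info *dst)
	{
		struct backing_dev_info *old = inode->i_mapping->backing_dev_info;

		if (old == dst)
			return;

		/* would need a stable lock ordering between the two bdis */
		spin_lock(&old->wb_list_lock);
		spin_lock(&dst->wb_list_lock);
		inode->i_mapping->backing_dev_info = dst;
		if (inode->i_state & I_DIRTY)
			list_move(&inode->i_wb_list, &dst->wb.b_dirty);
		spin_unlock(&dst->wb_list_lock);
		spin_unlock(&old->wb_list_lock);
	}

__blkdev_put() would then call this instead of assigning
i_data.backing_dev_info directly.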

Cheers,

Dave.
--
Dave Chinner
david@fromorbit.com
