Subject: Re: 2.6.17-rc6-mm1 -- BUG: possible circular locking deadlock detected!
From: Anton Altaparmakov <aia21@cam.ac.uk>
Date: Fri, 09 Jun 2006
On Fri, 2006-06-09 at 09:19 +0200, Ingo Molnar wrote: 
> * Anton Altaparmakov <aia21@cam.ac.uk> wrote:
> > This last lookup of physical location in map_mft_record() [actually in
> > readpage as map_mft_record() reads the page cache page containing the
> > record] cannot require us to load an extent mft record because the
> > runlist is complete so we cannot deadlock here.
>
> ah! So basically, the locking sequences the validator observed during
> load_system_files() are a one-time event that can only occur during
> ntfs_fill_super().
>
> if we ignore those dependencies (of the initial reading of the MFT inode
> generating recursive map_mft_record() calls) and take into account that
> the MFT inode will never again trigger map_mft_record() calls, then the
> locking is conflict-free after mounting has finished, right?

Correct! (-:

> so the validator is technically correct that load_system_files()
> generates locking patterns that have dependency conflicts - but that
> code is guaranteed to be single-threaded because it's a one time event
> and because the VFS has no access to the filesystem at that time yet so
> there is zero parallelism.

Correct.

I just sent you a long description of the process before reading your
email above... I need not have bothered had I known you already
understood the case!

> can you think of any simple and clean solution that would avoid those
> conflicting dependencies within the NTFS code? Perhaps some way to read
> the MFT inode [and perhaps other, always-cached inodes that are later
> used as metadata] without locking? I can code it up and test it, but i
> dont think i can find the best method myself :-)

I really do not want to do that. It would penalise ntfs performance in
the general case by introducing "if mounting, don't lock; otherwise
lock" checks pretty much everywhere in the driver, or it would require
duplicating a ton of code, which is exactly what my bootstrapping
mechanism exists to avoid in the first place!
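
Just to illustrate the kind of thing I mean, every locking site on the
mft mapping path would end up looking roughly like this (a schematic
sketch only, not actual driver source; the NVolMountInProgress() flag
test is entirely made up, in the style of the existing NVol*() helpers):

	/*
	 * Hypothetical illustration: skip the locking while the volume is
	 * still being bootstrapped during mount, take it normally later.
	 * NVolMountInProgress() does not exist in the driver.
	 */
	if (!NVolMountInProgress(vol))
		mutex_lock(&ni->mrec_lock);	/* serialise mft record access */
	/* ... map the mft record page as the code does today ... */
	if (!NVolMountInProgress(vol))
		mutex_unlock(&ni->mrec_lock);

Multiply that by every lock taken on that path and you can see why I
would rather not go there.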

And given that the only reason for the change you want is to make the
lock validator happy, I would much rather you did something in the lock
validator instead... Can't you just insert a call in ntfs_fill_super(),
just before the call to ntfs_read_inode_mount(), to the effect of
"turn_off_lock_validator_data_gathering()", and then, just after
ntfs_read_inode_mount() returns, insert another call to the effect of
"turn_on_lock_validator_data_gathering()"?

Does that sound reasonable to you?

I really cannot see why code needs to be penalised for a validator... I
would much rather see a complicated and slow lock validator than a
complicated and slow driver, given that the lock validator is only
compiled in for debugging purposes, whilst the driver is compiled into
all kernels, including release ones...

Best regards,

Anton
--
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer / IRC: #ntfs on irc.freenode.net
WWW: http://linux-ntfs.sf.net/ & http://www-stu.christs.cam.ac.uk/~aia21/

