Subject: Re: performance delta after VFS i_mutex=>i_rwsem conversion
On Mon, Jun 06, 2016 at 02:46:44PM -0700, Linus Torvalds wrote:

> Let's look at smaller changes first.
>
> In particular, why the f*ck do we take "next->d_lock" at all, much less twice?

See upthread re "low-hanging fruit".

> Do we really need that? Both of them seem bogus:
>
> - the first one only protects the "simple_positive()" test.
>
> That seems stupid. We hold the parent d_lock _and_ we hold the
> parent inode lock for reading, how the hell is simple_positive() going
> to change? Yeah, yeah, *maybe* we could catch a lookup that just
> happens to add the inode field in process, but it's not a race we even
> care about.

Can't happen - it's ramfs and lookups there never end up adding a positive
entry. So I don't believe that READ_ONCE() or anything of that sort would
be needed. All transitions from negative to positive happen under the
exclusive lock on the parent, which gives us all the barriers we need.
Transitions from hashed positive to negative or unhashed also happen only
under the same exclusive lock on the parent, which takes care of going in
the other direction.

> If we see the inode being non-NULL, it will now *stay*
> non-NULL, and we already depend on that (that "d_inode(next)" is then
> done without the lock held).

Like I said - it's stable.
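
FWIW, the loop in question looks roughly like this (paraphrased from
fs/libfs.c from memory, so take the details with a grain of salt) - the
two acquisitions of next->d_lock are the ones you're talking about:

	for (p = q->next; p != &dentry->d_subdirs; p = p->next) {
		struct dentry *next = list_entry(p, struct dentry, d_child);
		/* first acquisition: only guards the simple_positive() test */
		spin_lock_nested(&next->d_lock, DENTRY_D_LOCK_NESTED);
		if (!simple_positive(next)) {
			spin_unlock(&next->d_lock);
			continue;
		}
		spin_unlock(&next->d_lock);
		spin_unlock(&dentry->d_lock);
		if (!dir_emit(ctx, next->d_name.name, next->d_name.len,
			      d_inode(next)->i_ino, dt_type(d_inode(next))))
			return 0;
		spin_lock(&dentry->d_lock);
		/* second acquisition: only covers the list_move() of the cursor */
		spin_lock_nested(&next->d_lock, DENTRY_D_LOCK_NESTED);
		list_move(q, p);
		spin_unlock(&next->d_lock);
		ctx->pos++;
	}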

> - the second one only protects "list_move()" of the cursor. But since
> it's the child list, the "next->d_lock" thing ends up being
> irrelevant. It's the parent dentry lock we need to hold, nothing else.
> Not the "next" one.
>
> so I don't see the point of half the d_lock games we play.

Yes.
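
I.e. with the second next->d_lock dropped, the tail of the loop would
become something like this (untested, just to illustrate the point):

		spin_lock(&dentry->d_lock);
		/* move the cursor under the parent's ->d_lock alone;
		 * next->d_lock buys nothing for the list_move() */
		list_move(q, p);
		ctx->pos++;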

> And the thing about spinlock contention: having *nested* spinlocks be
> contented turns contention into an exponential thing. I really suspect
> that if we can just remove the nested spinlock, the dentry->d_lock
> contention will go down by a huge amount, because then you completely
> remove the "wait on lock while holding another lock" thing, which is
> what tends to really suck.

True in general, but here we really do a lot under that ->d_lock - all
list traversals are under it. So I suspect that contention on the nested
lock is not an issue in this particular load. It's certainly a separate
commit, so we'll see how much it gives on its own, but I doubt that
it'll be anywhere near enough.
