From: Trond Myklebust
Subject: RE: Heads-up: 3.6.2 / 3.6.3 NFS server oops: 3.6.2+ regression? (also an unrelated ext4 data loss bug)
Date: 2012-10-23
> -----Original Message-----
> From: Nix [mailto:nix@esperi.org.uk]
> Sent: Tuesday, October 23, 2012 1:36 PM
> To: Myklebust, Trond
> Cc: J. Bruce Fields; Ted Ts'o; linux-kernel@vger.kernel.org; Schumaker,
> Bryan; Peng Tao; gregkh@linuxfoundation.org; linux-nfs@vger.kernel.org;
> Stanislav Kinsbursky
> Subject: Re: Heads-up: 3.6.2 / 3.6.3 NFS server oops: 3.6.2+ regression? (also
> an unrelated ext4 data loss bug)
>
> On 23 Oct 2012, nix@esperi.org.uk uttered the following:
>
> > On 23 Oct 2012, Trond Myklebust spake thusly:
> >> On Tue, 2012-10-23 at 12:46 -0400, J. Bruce Fields wrote:
> >>> Looks like there's some confusion about whether nsm_client_get()
> >>> returns NULL or an error?
> >>
> >> nsm_client_get() looks extremely racy in the case where ln->nsm_users
> >> == 0. Since we never recheck the value of ln->nsm_users after taking
> >> nsm_create_mutex, what is stopping 2 different threads from both
> >> setting
> >> ln->nsm_clnt and re-initialising ln->nsm_users?
> >
> > Yep. At the worst possible time:
> >
> >     spin_lock(&ln->nsm_clnt_lock);
> >     if (ln->nsm_users) {
> >             if (--ln->nsm_users)
> >                     ln->nsm_clnt = NULL;
> > (1)         shutdown = !ln->nsm_users;
> >     }
> >     spin_unlock(&ln->nsm_clnt_lock);
> >
> > If a thread reinitializes nsm_users at point (1), after the
> > assignment, we could well end up with ln->nsm_clnt NULL and shutdown
> > false. A bit later, nsm_mon_unmon gets called with a NULL clnt, and boom.
>
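To make that interleaving concrete, here is a minimal sketch of the race (the CPU labels are illustrative; the put side is the snippet above from nsm_client_put(), and the get side is nsm_client_get()'s creation path, which writes ln->nsm_clnt and ln->nsm_users while holding only nsm_create_mutex, not the spinlock):

    CPU A: nsm_client_put()                CPU B: nsm_client_get() create path
    ---------------------------------      -----------------------------------
    /* assume ln->nsm_users >= 2 */
    spin_lock(&ln->nsm_clnt_lock);
                                           ln->nsm_clnt = clnt;  /* no spinlock! */
    if (--ln->nsm_users)
            ln->nsm_clnt = NULL;
                                           ln->nsm_users = 1;    /* point (1) */
    shutdown = !ln->nsm_users;  /* false */
    spin_unlock(&ln->nsm_clnt_lock);

    /* End state: ln->nsm_users == 1, ln->nsm_clnt == NULL, no shutdown;
     * the next nsm_mon_unmon() call dereferences a NULL rpc_clnt. */
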
> Possible fix if so, utterly untested so far (will test when I can face yet another
> reboot and fs-corruption-recovery-hell cycle, in a few hours), may ruin
> performance, violate locking hierarchies, and consume
> kittens:
>
> diff --git a/fs/lockd/mon.c b/fs/lockd/mon.c
> index e4fb3ba..da91cdf 100644
> --- a/fs/lockd/mon.c
> +++ b/fs/lockd/mon.c
> @@ -98,7 +98,6 @@ static struct rpc_clnt *nsm_client_get(struct net *net)
>  		spin_unlock(&ln->nsm_clnt_lock);
>  		goto out;
>  	}
> -	spin_unlock(&ln->nsm_clnt_lock);
> 
>  	mutex_lock(&nsm_create_mutex);
>  	clnt = nsm_create(net);
> @@ -108,6 +107,7 @@ static struct rpc_clnt *nsm_client_get(struct net *net)
>  		ln->nsm_users = 1;
>  	}
>  	mutex_unlock(&nsm_create_mutex);
> +	spin_unlock(&ln->nsm_clnt_lock);

You can't hold a spinlock while sleeping. Both mutex_lock() and nsm_create() can definitely sleep.

The correct way to do this is to grab the spinlock and recheck the value of ln->nsm_users inside the 'if (!IS_ERR())' condition. If it is still zero, bump it and set ln->nsm_clnt; otherwise bump it, take the existing ln->nsm_clnt, and call rpc_shutdown_client() on the redundant nsm client after dropping the spinlock.
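
A minimal sketch of that scheme, reusing the names from the code above (untested; the exact structure and error handling here are assumptions for illustration, not a finished patch):

	static struct rpc_clnt *nsm_client_get(struct net *net)
	{
		struct lockd_net *ln = net_generic(net, lockd_net_id);
		struct rpc_clnt *clnt;

		/* Fast path: client already exists, just take a reference. */
		spin_lock(&ln->nsm_clnt_lock);
		if (ln->nsm_users) {
			ln->nsm_users++;
			clnt = ln->nsm_clnt;
			spin_unlock(&ln->nsm_clnt_lock);
			goto out;
		}
		spin_unlock(&ln->nsm_clnt_lock);

		mutex_lock(&nsm_create_mutex);
		clnt = nsm_create(net);
		if (!IS_ERR(clnt)) {
			struct rpc_clnt *redundant = NULL;

			/* Recheck under the spinlock: another thread may have
			 * set up the client while we slept in nsm_create(). */
			spin_lock(&ln->nsm_clnt_lock);
			if (ln->nsm_users == 0)
				ln->nsm_clnt = clnt;	/* we won the race */
			else
				redundant = clnt;	/* we lost it */
			ln->nsm_users++;
			clnt = ln->nsm_clnt;
			spin_unlock(&ln->nsm_clnt_lock);

			/* Dispose of the unneeded client only after the
			 * spinlock is dropped: rpc_shutdown_client() sleeps. */
			if (redundant != NULL)
				rpc_shutdown_client(redundant);
		}
		mutex_unlock(&nsm_create_mutex);
	out:
		return clnt;
	}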

Cheers
Trond

