Subject: Re: [PATCH 4/4] Staging: lustre: sparse lock warning fix

On May 21, 2015, at 11:12 AM, Dan Carpenter wrote:

> Oh, sorry, I didn't read your patch very carefully. It won't cause a
> deadlock. But I'm going to assume it's still not right until lustre
> expert Acks it.

I just took a closer look, and it appears the original code is buggy and the patch just propagates the bugginess.

If we look at nrs_policy_put_locked(), it eventually ends up in nrs_policy_stop0()
while holding the lock of whatever happened to be the first non-NULL policy in the array.
But nrs_policy_stop0() unlocks the lock on the policy it was actually called on (already likely deadlock material) and then relocks it.
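
To spell out the hazard (this is only a rough sketch of the pattern as I read it, not the literal lustre source; "first" is just shorthand for the index of the first non-NULL entry):

	/* nrs_resource_put_safe() takes the nrs_lock of the FIRST
	 * non-NULL policy and holds it across all of the puts: */
	spin_lock(&pols[first]->pol_nrs->nrs_lock);

	/* ...but nrs_policy_stop0(), reached via
	 * nrs_policy_put_locked(pols[i]), drops and retakes the lock
	 * of the policy it was CALLED on, which need not be the same: */
	spin_unlock(&pols[i]->pol_nrs->nrs_lock); /* may not be the lock held above */
	/* ... policy stop work runs here without the lock ... */
	spin_lock(&pols[i]->pol_nrs->nrs_lock);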

The problem would arise only if there is more than one NRS policy registered, which is theoretically possible, but certainly makes no sense on a client (besides, none of the advanced NRS policies
made it in anyway, and I almost feel like they just add unnecessary complication in client-only code).

The code looks elaborate enough that the first policy's lock seems intended to always serve as the guardian lock, but in that case the stop0 behavior might be a bug?
Or it's possible we never end up in stop0 here due to the NRS state machine?
Let's see what Nikitas and Liang remember about any of this (one of them is the original author of this code, but I am not sure which).

Nikitas, Liang: The code in question is in nrs_resource_put_safe:
	for (i = 0; i < NRS_RES_MAX; i++) {
		if (pols[i] == NULL)
			continue;

		if (nrs == NULL) {
			nrs = pols[i]->pol_nrs;
			spin_lock(&nrs->nrs_lock);
		}
		nrs_policy_put_locked(pols[i]);
	}

	if (nrs != NULL)
		spin_unlock(&nrs->nrs_lock);
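
For illustration only -- a hypothetical rework, not something I am proposing for the tree: if each policy's own nrs_lock were taken around its put, the lock that nrs_policy_stop0() drops and retakes would always be the one actually held:

	for (i = 0; i < NRS_RES_MAX; i++) {
		if (pols[i] == NULL)
			continue;

		/* take and release each policy's own lock instead of
		 * holding the first policy's lock across all the puts */
		spin_lock(&pols[i]->pol_nrs->nrs_lock);
		nrs_policy_put_locked(pols[i]);
		spin_unlock(&pols[i]->pol_nrs->nrs_lock);
	}

Whether that would actually be safe depends on whether the first lock really is meant as a guardian across all of the puts, which is exactly the question above.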


