Date: Wed, 10 Jan 2001 10:47:44 +0100 (CET)
Subject: Re: NFS client deadlock on SMP machines
From: Trond Myklebust <>
>>>>> " " == Andrew Morton <andrewm@uow.edu.au> writes:
> There appear to be two places where the NFS client code can
> deadlock:
>
> nfs_reqlist_init() {
>     spin_lock(&nfs_flushd_lock);
>     rpc_new_task->
>       rpc_allocate->
>         -> kmalloc(GFP_RPC)          (__GFP_WAIT is true)
>
> inode_remove_flushd() {
>     spin_lock(&nfs_flushd_lock);
>     iput(inode)->
>       nfs_delete_inode->
>         delete_inode->
>           wait_on_inode
>           truncate_inode_pages->
>             truncate_list_pages->
>               wait_on_page
>
> The latter is most likely the problem. Here's a patch - please
> test. The inode_remove_flushd() change is correct. Not so sure
> about the nfs_reqlist_init() change.
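Both traces reduce to the same anti-pattern: calling something that can sleep while a spinlock is held. The sleeping task never drops the lock, and another CPU that could make progress on its behalf may spin on that same lock forever - which is also why only SMP machines hit it, since on a UP 2.4 kernel the spinlock compiles away. A minimal sketch of the broken shape, with a hypothetical some_lock and GFP_KERNEL standing in for GFP_RPC (both have __GFP_WAIT set):

	#include <linux/spinlock.h>
	#include <linux/slab.h>
	#include <linux/string.h>

	static spinlock_t some_lock = SPIN_LOCK_UNLOCKED;

	/* Sketch only, not code from the tree: the shape shared by
	 * both traces above.  Any sleeping call under the lock will
	 * do -- kmalloc() with __GFP_WAIT, or iput() sleeping in
	 * wait_on_inode() and wait_on_page(). */
	static void broken(void)
	{
		char *p;

		spin_lock(&some_lock);
		p = kmalloc(64, GFP_KERNEL);	/* may sleep, lock held! */
		if (p) {
			memset(p, 0, 64);
			kfree(p);
		}
		spin_unlock(&some_lock);
	}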
Doh. You're quite right on both counts. I made a small modification to your patch for the rpc_new_task() problem (you forgot to release the RPC task if it's not used).
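To make the locking order explicit, here is a compressed sketch of what the patched nfs_reqlist_init() below ends up doing; the real function does more setup, and the names follow the patch:

	/* Fragment, as if inside fs/nfs/flushd.c -- not standalone. */
	static int reqlist_init_sketch(struct nfs_server *server)
	{
		struct nfs_reqlist *cache;
		struct rpc_task *task;

		/* The allocation may sleep, so do it BEFORE locking. */
		task = rpc_new_task(server->client, NULL, RPC_TASK_ASYNC);
		if (!task)
			return -ENOMEM;

		spin_lock(&nfs_flushd_lock);
		cache = server->rw_requests;
		if (cache->task) {
			/* Another CPU installed a task first, so ours
			 * is unused; releasing it (outside the lock)
			 * is the modification mentioned above. */
			spin_unlock(&nfs_flushd_lock);
			rpc_release_task(task);
			return 0;
		}
		task->tk_calldata = server;
		cache->task = task;
		spin_unlock(&nfs_flushd_lock);
		return 0;
	}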
Linus, please apply for 2.4.1...
Cheers,
  Trond
--- linux-2.4.0/fs/nfs/flushd.c.orig	Wed Jun 21 16:25:17 2000
+++ linux-2.4.0/fs/nfs/flushd.c	Wed Jan 10 10:44:08 2001
@@ -71,18 +71,17 @@
 	int status = 0;
 
 	dprintk("NFS: writecache_init\n");
+
+	/* Create the RPC task */
+	if (!(task = rpc_new_task(server->client, NULL, RPC_TASK_ASYNC)))
+		return -ENOMEM;
+
 	spin_lock(&nfs_flushd_lock);
 	cache = server->rw_requests;
 
 	if (cache->task)
 		goto out_unlock;
 
-	/* Create the RPC task */
-	status = -ENOMEM;
-	task = rpc_new_task(server->client, NULL, RPC_TASK_ASYNC);
-	if (!task)
-		goto out_unlock;
-
 	task->tk_calldata = server;
 
 	cache->task = task;
@@ -99,6 +98,7 @@
 	return 0;
  out_unlock:
 	spin_unlock(&nfs_flushd_lock);
+	rpc_release_task(task);
 	return status;
 }
 
@@ -195,7 +195,9 @@
 	if (*q) {
 		*q = inode->u.nfs_i.hash_next;
 		NFS_FLAGS(inode) &= ~NFS_INO_FLUSH;
+		spin_unlock(&nfs_flushd_lock);
 		iput(inode);
+		return;
 	}
  out:
 	spin_unlock(&nfs_flushd_lock);
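The second hunk applies the same rule to the iput() path: the inode is unhashed and its flag cleared while the lock is held, the lock is dropped, and only then is iput() called, since it can sleep in wait_on_inode() or down in truncate_inode_pages(). The added return is needed as well -- without it, control would fall through to the unlock at out: and release the lock a second time.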