Date: 8 Aug 2003
From: Paul Clements
Subject: Re: [PATCH] 2.6.0 NBD driver: remove send/receive race for request
Lou Langholtz wrote:
>
> Paul Clements wrote:
>
> >>Except that in the error case, the send basically didn't succeed. So no
> >>need to worry about receiving a reply and no race possibility in that case.
> >
> >As long as the request is on the queue, it is possible for nbd-client to
> >die, thus freeing the request (via nbd_clear_que -> nbd_end_request),
> >and leaving us with a race between the free and do_nbd_request()
> >accessing the request structure.
>
> Quite right. I missed that case in this last patch (when nbd_do_it has
> returned and NBD_DO_IT is about to call nbd_clear_que [1]). Just moving
> the errors increment (near the end of nbd_send_req) to within the
> semaphore-protected region would fix this particular case. An even
> larger race window exists for the request getting freed when
> nbd-client is used to disconnect, since it calls NBD_CLEAR_QUE before
> NBD_DISCONNECT [2]. In that case, moving the errors increment doesn't
> help, of course, since the nbd_clear_que in 2.6.0-test2 doesn't bother
> to check the tx_lock semaphore anyway. I believe reference counting the
> request (as you suggest) would protect against both these windows, though.

> Will you be working on closing the other clear-queue race also then?
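
In outline, the idea is to pin a request while it is being transmitted
and make the queue-clearing path wait until the pin is dropped before
freeing anything. Here is a rough userspace sketch of that scheme
(pthreads stand in for the kernel's spinlock and scheduler; the names
are illustrative, not the driver's):

/*
 * Userspace analogue of the request refcounting (illustrative only).
 * A request holds one reference for being on the queue; the sender
 * takes a second across the transmit, which runs without the queue
 * lock held. The clear path only frees a request once the transmit
 * reference is gone.
 */
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

struct req {
	int ref_count;		/* 1 = queued; 2 = queued and in xmit */
	struct req *next;
};

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static struct req *queue_head;

static struct req *new_req(void)
{
	struct req *req = calloc(1, sizeof(*req));

	req->ref_count = 1;	/* the queue's reference */
	return req;
}

static void send_req(struct req *req)	/* stand-in for nbd_send_req() */
{
	(void)req;	/* ... write the request to the socket ... */
}

static void submit(struct req *req)
{
	pthread_mutex_lock(&queue_lock);
	req->next = queue_head;	/* queue it ... */
	queue_head = req;
	req->ref_count++;	/* ... and pin it across the send */
	pthread_mutex_unlock(&queue_lock);

	send_req(req);		/* runs lockless, may block */

	pthread_mutex_lock(&queue_lock);
	req->ref_count--;	/* unpin; the clear path may free it now */
	pthread_mutex_unlock(&queue_lock);
}

static void clear_que(void)	/* stand-in for nbd_clear_que() */
{
retry:
	pthread_mutex_lock(&queue_lock);
	while (queue_head) {
		struct req *req = queue_head;

		if (req->ref_count > 1) {	/* still in xmit */
			pthread_mutex_unlock(&queue_lock);
			sleep(1);		/* back off, then retry */
			goto retry;
		}
		queue_head = req->next;
		free(req);	/* safe: no transmit reference remains */
	}
	pthread_mutex_unlock(&queue_lock);
}

The patch below implements the same scheme with lo->queue_lock and the
ref_count field of struct request, polling with schedule_timeout()
where the sketch sleeps.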

Here's the patch to fix up several race conditions in nbd. It requires
reverting the already included (but admittedly incomplete)
nbd-race-fix.patch that's in -mm5.

Andrew, please apply.

Thanks,
Paul

--- linux-2.6.0-test2-mm4-PRISTINE/drivers/block/nbd.c	Sun Jul 27 12:58:51 2003
+++ linux-2.6.0-test2-mm4/drivers/block/nbd.c	Thu Aug 7 18:02:23 2003
@@ -416,11 +416,19 @@ void nbd_clear_que(struct nbd_device *lo
 	BUG_ON(lo->magic != LO_MAGIC);
 #endif
 
+retry:
 	do {
 		req = NULL;
 		spin_lock(&lo->queue_lock);
 		if (!list_empty(&lo->queue_head)) {
 			req = list_entry(lo->queue_head.next, struct request, queuelist);
+			if (req->ref_count > 1) { /* still in xmit */
+				spin_unlock(&lo->queue_lock);
+				printk(KERN_DEBUG "%s: request %p: still in use (%d), waiting...\n",
+					lo->disk->disk_name, req, req->ref_count);
+				schedule_timeout(HZ); /* wait a second */
+				goto retry;
+			}
 			list_del_init(&req->queuelist);
 		}
 		spin_unlock(&lo->queue_lock);
@@ -490,6 +498,7 @@ static void do_nbd_request(request_queue
 		}
 
 		list_add(&req->queuelist, &lo->queue_head);
+		req->ref_count++; /* make sure req does not get freed */
 		spin_unlock(&lo->queue_lock);
 
 		nbd_send_req(lo, req);
@@ -499,12 +508,14 @@ static void do_nbd_request(request_queue
 				lo->disk->disk_name);
 			spin_lock(&lo->queue_lock);
 			list_del_init(&req->queuelist);
+			req->ref_count--;
 			spin_unlock(&lo->queue_lock);
 			nbd_end_request(req);
 			spin_lock_irq(q->queue_lock);
 			continue;
 		}
 
+		req->ref_count--;
 		spin_lock_irq(q->queue_lock);
 		continue;
 
@@ -548,27 +559,27 @@ static int nbd_ioctl(struct inode *inode
 		if (!lo->sock)
 			return -EINVAL;
 		nbd_send_req(lo, &sreq);
-		return 0 ;
+		return 0;
 
 	case NBD_CLEAR_SOCK:
+		error = 0;
+		down(&lo->tx_lock);
+		lo->sock = NULL;
+		up(&lo->tx_lock);
+		spin_lock(&lo->queue_lock);
+		file = lo->file;
+		lo->file = NULL;
+		spin_unlock(&lo->queue_lock);
 		nbd_clear_que(lo);
 		spin_lock(&lo->queue_lock);
 		if (!list_empty(&lo->queue_head)) {
-			spin_unlock(&lo->queue_lock);
-			printk(KERN_ERR "%s: Some requests are in progress -> can not turn off.\n",
-				lo->disk->disk_name);
-			return -EBUSY;
+			printk(KERN_ERR "nbd: disconnect: some requests are in progress -> please try again.\n");
+			error = -EBUSY;
 		}
-		file = lo->file;
-		if (!file) {
-			spin_unlock(&lo->queue_lock);
-			return -EINVAL;
-		}
-		lo->file = NULL;
-		lo->sock = NULL;
 		spin_unlock(&lo->queue_lock);
-		fput(file);
-		return 0;
+		if (file)
+			fput(file);
+		return error;
 	case NBD_SET_SOCK:
 		if (lo->file)
 			return -EBUSY;
@@ -616,10 +627,13 @@ static int nbd_ioctl(struct inode *inode
 	 * there should be a more generic interface rather than
 	 * calling socket ops directly here */
 		down(&lo->tx_lock);
-		printk(KERN_WARNING "%s: shutting down socket\n",
+		if (lo->sock) {
+			printk(KERN_WARNING "%s: shutting down socket\n",
 				lo->disk->disk_name);
-		lo->sock->ops->shutdown(lo->sock, SEND_SHUTDOWN|RCV_SHUTDOWN);
-		lo->sock = NULL;
+			lo->sock->ops->shutdown(lo->sock,
+					SEND_SHUTDOWN|RCV_SHUTDOWN);
+			lo->sock = NULL;
+		}
 		up(&lo->tx_lock);
 		spin_lock(&lo->queue_lock);
 		file = lo->file;
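
For reference, the disconnect sequence that exercises these paths looks
roughly like this from userspace (a sketch of the ordering discussed
above, with NBD_CLEAR_QUE issued before NBD_DISCONNECT; this is not
nbd-client's actual source, and error handling is omitted):

/* Illustrative disconnect sequence (not nbd-client's actual code). */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/nbd.h>

int nbd_disconnect(const char *dev)	/* e.g. "/dev/nbd0" */
{
	int fd = open(dev, O_RDWR);

	if (fd < 0)
		return -1;
	ioctl(fd, NBD_CLEAR_QUE);	/* flush queued requests first ... */
	ioctl(fd, NBD_DISCONNECT);	/* ... then ask the server to stop */
	ioctl(fd, NBD_CLEAR_SOCK);	/* drop the socket and file refs */
	close(fd);
	return 0;
}

With the patch applied, NBD_CLEAR_SOCK takes lo->sock away under
tx_lock before the queue is touched, and nbd_clear_que() waits for
in-flight sends, so this sequence can no longer free a request out
from under nbd_send_req().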