Subject: Performance drop on SCSI hard disk
Commit c21e6beba8835d09bb80e34961 removed the REENTER flag and changed
scsi_run_queue() to punt all requests on starved_list devices to
kblockd. Yes, as Jens mentioned, performance on slow SCSI disks is hurt
by this. :) (An Intel SSD isn't affected here.)

In our testing on a 12-disk SAS JBOD, fio write with the sync ioengine
drops about 30~40% in throughput, and fio randread/randwrite with the
aio ioengine drops about 20%/50% respectively. The fio mmap testing is
hurt as well.
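
For reference, the jobs were roughly like the following (the bs,
runtime and device names here are illustrative, not our exact job
files):

[global]
direct=1
bs=4k
runtime=60
time_based

[sync-write]
ioengine=sync
rw=write
# one such section per disk in the JBOD
filename=/dev/sdb

[aio-randwrite]
ioengine=libaio
iodepth=32
rw=randwrite
filename=/dev/sdc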

With the following debug patch, the performance is totally recovered in
our testing. But without the REENTER flag, in some corner cases (e.g. a
device that keeps getting blocked and then unblocked repeatedly),
__blk_run_queue() may recursively call scsi_run_queue() and then
overflow the kernel stack.
I don't know the details of the block device drivers; I'm just
wondering why SCSI needs the REENTER flag here. :)
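
To make the corner case concrete: the recursion I worry about looks
roughly like below, and the REENTER test in the old __blk_run_queue()
is what used to break the cycle. (This is reconstructed from the
pre-c21e6be code from memory, so the details may not be exact.)

	/*
	 * Possible recursion with no depth limit:
	 *
	 *   scsi_run_queue()
	 *     __blk_run_queue(q)
	 *       q->request_fn(q)         == scsi_request_fn()
	 *         ... device blocked/unblocked, queue re-run ...
	 *           scsi_run_queue()     <- recurses
	 *
	 * The old __blk_run_queue() broke the cycle roughly like so:
	 */
	if (!queue_flag_test_and_set(QUEUE_FLAG_REENTER, q)) {
		/* first entry: run the queue directly */
		q->request_fn(q);
		queue_flag_clear(QUEUE_FLAG_REENTER, q);
	} else {
		/* already inside request_fn: punt to kblockd instead */
		queue_delayed_work(kblockd_workqueue, &q->delay_work, 0);
	}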

James, do you have any ideas on this?

Regards
Alex
======
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index e9901b8..24e8589 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -432,8 +432,11 @@ static void scsi_run_queue(struct request_queue *q)
 					       &shost->starved_list);
 			continue;
 		}
-
-		blk_run_queue_async(sdev->request_queue);
+		spin_unlock(shost->host_lock);
+		spin_lock(sdev->request_queue->queue_lock);
+		__blk_run_queue(sdev->request_queue);
+		spin_unlock(sdev->request_queue->queue_lock);
+		spin_lock(shost->host_lock);
 	}
 	/* put any unprocessed entries back */
 	list_splice(&starved_list, &shost->starved_list);
