Date:    4 Sep 2001
From:    Jonathan Lahr <lahr@us.ibm.com>
Subject: Re: io_request_lock/queue_lock patch

> You are now browsing the request list without agreeing on what lock is
> being held -- what happens to drivers assuming that io_request_lock
> protects the list? Boom. For 2.4 we simply cannot afford to muck around
> with this, it's just too dangerous. For 2.5 I already completely removed
> the io_request_lock (also helps to catch references to it from drivers).

In this patch, io_request_lock and queue_lock are both acquired in
generic_unplug_device, so request_fn is still invoked under the lock
drivers expect and request queue integrity is protected. __make_request
acquires queue_lock instead of io_request_lock, which protects queue
integrity while allowing greater concurrency across queues.
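
To illustrate the scheme, here is a rough sketch of the two paths, assuming
the patch adds a spinlock_t queue_lock member to request_queue_t; this is an
approximation of the idea, not the patch code itself:

#include <linux/blkdev.h>
#include <linux/spinlock.h>

static void generic_unplug_device(void *data)
{
	request_queue_t *q = (request_queue_t *) data;
	unsigned long flags;

	/*
	 * Hold both locks across request_fn so drivers that still
	 * assume io_request_lock protects the request list stay safe.
	 */
	spin_lock_irqsave(&io_request_lock, flags);
	spin_lock(&q->queue_lock);
	if (q->plugged) {
		q->plugged = 0;
		if (!list_empty(&q->queue_head))
			q->request_fn(q);
	}
	spin_unlock(&q->queue_lock);
	spin_unlock_irqrestore(&io_request_lock, flags);
}

while in __make_request only the per-queue lock guards queue insertion, so
submissions to different queues no longer contend on the global lock:

	spin_lock_irq(&q->queue_lock);
	/* merge with an existing request or add to q->queue_head */
	spin_unlock_irq(&q->queue_lock);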

Nevertheless, I understand your unwillingness to change a lock as
pervasive as io_request_lock; such a change would of course carry
risk. I am simply trying to improve 2.4 I/O performance, since 2.4
could have a long time left to live.

> I agree with your SCSI approach, it's the same we took. Low level
> drivers must be responsible for their own locking, the mid layer should
> not pre-grab anything for them.

Yes, calling outside a subsystem's scope with locks held can cause problems.
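
As a sketch of that division of responsibility (example_host, example_request_fn,
and example_start_io are hypothetical names for illustration, not from any real
driver), the low-level request_fn takes a driver-private lock instead of relying
on whatever lock the mid layer may have pre-grabbed:

struct example_host {
	spinlock_t lock;	/* protects this adapter's own state */
	/* ... adapter registers, outstanding command list, etc. ... */
};

static void example_request_fn(request_queue_t *q)
{
	struct example_host *host = q->queuedata;
	unsigned long flags;

	/*
	 * Serialize against the driver's own interrupt handler with a
	 * driver-private lock rather than depending on a lock the mid
	 * layer may or may not hold on entry.
	 */
	spin_lock_irqsave(&host->lock, flags);
	while (!list_empty(&q->queue_head)) {
		struct request *rq = blkdev_entry_next_request(&q->queue_head);
		blkdev_dequeue_request(rq);
		example_start_io(host, rq);	/* hypothetical hardware submit */
	}
	spin_unlock_irqrestore(&host->lock, flags);
}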

Thanks for your feedback.

Jonathan

--
Jonathan Lahr
IBM Linux Technology Center
Beaverton, Oregon
lahr@us.ibm.com
503-578-3385

