Date:    31 Jul 2014
Subject: Re: [PATCH] locking/mutexes: Revert "locking/mutexes: Add extra reschedule point"
From:    Ilya Dryomov

On Thu, Jul 31, 2014 at 5:13 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Thu, Jul 31, 2014 at 04:37:29PM +0400, Ilya Dryomov wrote:
>
>> This didn't make sense to me at first either, and I'll be happy to be
>> proven wrong, but we can reproduce this with rbd very reliably under
>> higher than usual load, and the revert makes it go away. What we are
>> seeing in the rbd scenario is the following.
>
> This is drivers/block/rbd.c ? I can find but a single mutex_lock() in
> there.

This is in net/ceph and include/linux/ceph.

Mutex A - struct ceph_osd_client::request_mutex, taken in alloc_msg(),
handle_timeout(), handle_osds_timeout(), ceph_osdc_start_request().

Mutex B - struct ceph_connection::mutex, taken in ceph_con_send().

dmesg with a sample dump of blocked tasks attached.

Basically everybody except kjournald:4398 is waiting for request_mutex,
which kjournald acquired in ceph_osdc_start_request().  kjournald itself,
however, sits waiting for ceph_connection::mutex, even though that mutex
has since been released.
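
To make the dependency concrete, here is a rough sketch of the pattern
(task roles are illustrative, not the exact call chain):

	/* writer (e.g. kjournald), submitting a request through libceph */
	mutex_lock(&osdc->request_mutex);	/* mutex A */
	ceph_con_send(con, msg);		/* blocks on mutex B (con->mutex) */
	mutex_unlock(&osdc->request_mutex);

	/* some other sender on the same ceph_connection */
	mutex_lock(&con->mutex);		/* mutex B */
	/* ... short critical section ... */
	mutex_unlock(&con->mutex);

The writer blocks on B inside ceph_con_send() and never returns, even
after the other task drops B, so everything queued behind request_mutex
(A) is stuck as well.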

>> Suppose foo needs mutexes A and B, bar needs mutex B. foo acquires
>> A and then wants to acquire B, but B is held by bar. foo spins
>> a little and ends up calling schedule_preempt_disabled() on line 484
>> above, but that call never returns, even though a hundred usecs later
>> bar releases B. foo ends up stuck in mutex_lock() indefinitely, but
>> still holds A and everybody else who needs A gets behind A. Given that
>> this A happens to be a central libceph mutex, all rbd activity halts.
>> Deadlock may not be the best term for this, but never returning from
>> mutex_lock(&B) even though B has been unlocked is *a* problem.
>>
>> This obviously doesn't happen every time schedule_preempt_disabled() on
>> line 484 is called, so there must be some sort of race here. I'll send
>> along the actual rbd stack traces shortly.
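
For reference, line 484 is the reschedule point that the patch being
reverted added to the mutex slowpath; paraphrasing from memory, so the
exact code and line number may differ in your tree:

	/* __mutex_lock_common() slowpath, after optimistic spinning fails */
	if (need_resched())
		schedule_preempt_disabled();	/* the call that never returns */

	spin_lock_mutex(&lock->wait_lock, flags);
	/* only past this point do we add ourselves to lock->wait_list */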
>
> Smells like maybe current->state != TASK_RUNNING; does the below
> trigger?
>
> If so, you've wrecked something in whatever...

Trying it now.
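
If I'm reading the hypothesis right, the losing pattern would be roughly
the following (purely illustrative, not a call chain I've identified):

	/* caller prepares to wait for something unrelated ... */
	set_current_state(TASK_UNINTERRUPTIBLE);	/* state != TASK_RUNNING */

	/* ... and, still in that state, takes a contended mutex */
	mutex_lock(&B);
	/*
	 * Optimistic spinning fails, need_resched() is true, and the new
	 * reschedule point calls schedule_preempt_disabled() while we are
	 * TASK_UNINTERRUPTIBLE but not yet on B's wait_list.  mutex_unlock(&B)
	 * only wakes tasks on that wait_list, so nothing wakes us until some
	 * unrelated wakeup arrives.
	 */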

Thanks,

Ilya
[attachment: application/octet-stream]