Date: 2009-12-09
From: Li Zefan
Subject: Re: [RFC] [PATCH 1/5] cgroups: revamp subsys array
Ben Blum wrote:
> On Tue, Dec 08, 2009 at 03:38:43PM +0800, Li Zefan wrote:
>>> @@ -1291,6 +1324,7 @@ static int cgroup_get_sb(struct file_system_type *fs_type,
>>> struct cgroupfs_root *new_root;
>>>
>>> /* First find the desired set of subsystems */
>>> + down_read(&subsys_mutex);
>> Hmm.. this can lead to deadlock. sget() returns success with sb->s_umount
>> held, so here we have:
>>
>> down_read(&subsys_mutex);
>>
>> down_write(&sb->s_umount);
>>
>> On the other hand, sb->s_umount is held before calling kill_sb(),
>> so when umounting we have:
>>
>> down_write(&sb->s_umount);
>>
>> down_read(&subsys_mutex);
>
> Unless I'm gravely mistaken, you can't have deadlock on an rwsem when
> it's being taken for reading in both cases? You would have to have at
> least one of the cases being down_write.
>

lockdep will warn on this..
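
To spell out what lockdep sees, here is a rough sketch of the two
orderings, assuming the names from the patch. mount_path() and
umount_path() are made-up wrappers, not the real call chains; sget()
and ->kill_sb() are collapsed into the lock calls they end up making.

#include <linux/fs.h>
#include <linux/rwsem.h>

static DECLARE_RWSEM(subsys_mutex);

/* mount: cgroup_get_sb() takes subsys_mutex first, then sget()
 * succeeds and returns with sb->s_umount held for writing */
static void mount_path(struct super_block *sb)
{
	down_read(&subsys_mutex);
	down_write(&sb->s_umount);
	/* ... */
	up_write(&sb->s_umount);
	up_read(&subsys_mutex);
}

/* umount: sb->s_umount is already held for writing when ->kill_sb()
 * runs, and the cgroup code then takes subsys_mutex */
static void umount_path(struct super_block *sb)
{
	down_write(&sb->s_umount);
	down_read(&subsys_mutex);
	/* ... */
	up_read(&subsys_mutex);
	up_write(&sb->s_umount);
}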

And it can really lead to deadlock, though not so obviously:

thread 1                thread 2                thread 3
---------------------------------------------------------
read(A)                 write(B)

                                                write(A)

                        read(A)

write(B)

(time goes from top to bottom)

t3 is waiting for t1 to release its read lock on A; then t2 tries to
acquire A for reading, but it has to wait behind t3's queued writer,
and t1 in turn has to wait for t2 to release B.

Note: a read lock has to wait if a write lock is already
waiting for the lock.
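
To make the interleaving concrete, here is a kernel-style sketch with
two rwsems standing in for the locks above. A and B are just
illustrative names, and the kthread plumbing needed to actually run the
three threads is omitted; the numbered steps in the comments give the
order of events.

#include <linux/rwsem.h>

static DECLARE_RWSEM(A);
static DECLARE_RWSEM(B);

static void thread1(void)
{
	down_read(&A);		/* step 1: granted                          */
	down_write(&B);		/* step 4: blocks, t2 holds B               */
	up_write(&B);
	up_read(&A);
}

static void thread2(void)
{
	down_write(&B);		/* step 1: granted                          */
	down_read(&A);		/* step 3: blocks behind t3's queued writer */
	up_read(&A);
	up_write(&B);
}

static void thread3(void)
{
	down_write(&A);		/* step 2: blocks, t1 holds A for reading   */
	up_write(&A);
}

Every thread then waits on another: t1 waits for t2 to release B, t2's
read of A waits behind t3's queued writer, and t3 waits for t1 to drop
its read of A.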

> In fairness to readability, perhaps subsys_mutex should instead be
> subsys_rwsem? It seemed to me that calling it "mutex" was
> conventional anyway.
>



