Subject: [PATCH] Make same-process recursive mm_struct access possible
Date: Thu, 20 Sep 2001 16:24:35 +0100
From: David Howells <>
Based on an idea by Andrea Arcangeli, this patch wraps the rw_semaphore on the mm_struct (to force you to use special locking operations to access it) and adds a counter to the task_struct (which counts the number of active read locks you hold on _your_ mm_struct). The four new ops are:
	mm_lock_shared(struct mm_struct *);
	mm_unlock_shared(struct mm_struct *);
	mm_lock_exclusive(struct mm_struct *);
	mm_unlock_exclusive(struct mm_struct *);
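Concretely, the wrapping might look roughly like this. This is only a sketch, not the patch text: the field name mm_lockcnt is hypothetical, and in the patch the semaphore is wrapped so that it can only be reached through the ops above, whereas it is shown plainly here for brevity.

	struct mm_struct {
		/* ... */
		struct rw_semaphore mmap_sem;	/* access only via mm_lock_*()/mm_unlock_*() */
		/* ... */
	};

	struct task_struct {
		/* ... */
		int mm_lockcnt;		/* active read locks held on this task's
					 * own mm (hypothetical field name) */
		/* ... */
	};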
* The exclusive ops map directly to down_write() and up_write().
* The unlock-shared op decrements the current readlock counter if the mm being unlocked is the current mm; it then calls up_read().
* The lock-shared op does one of the following two things:
* If the mm being locked is the current mm, and if the current readlock counter is greater than zero, then it calls down_read_recursive() and then increments the current readlock counter.
* Otherwise, it just calls down_read(); if the mm is the current mm, it then increments the current readlock counter.
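For illustration, the two shared-lock paths just described might look roughly like this (again a sketch using the hypothetical mm_lockcnt field, not the patch itself):

	static inline void mm_lock_shared(struct mm_struct *mm)
	{
		if (mm == current->mm && current->mm_lockcnt > 0) {
			/* we already hold a read lock on our own mm,
			 * so this can never block */
			down_read_recursive(&mm->mmap_sem);
		} else {
			down_read(&mm->mmap_sem);
		}
		if (mm == current->mm)
			current->mm_lockcnt++;
	}

	static inline void mm_unlock_shared(struct mm_struct *mm)
	{
		if (mm == current->mm)
			current->mm_lockcnt--;
		up_read(&mm->mmap_sem);
	}

Note that the counter only tracks locks on the task's own mm; that is what lets the lock path prove that down_read_recursive()'s precondition holds without having to look inside the semaphore.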
down_read_recursive() is a new rw-semaphore operation. It simply adjusts the semaphore to record an extra active readlock. As specified in the comments associated with it, it MUST ONLY be called if you know that there is already an active read lock on the semaphore. Note that it never needs to wait; given the preconditions for calling it, it knows it can always succeed immediately.
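As a minimal sketch of what that amounts to, assuming the generic spinlock-based rwsem (an "activity" count of active readers protected by wait_lock; the per-architecture implementations may differ):

	void down_read_recursive(struct rw_semaphore *sem)
	{
		/* The caller guarantees the semaphore is already held for
		 * reading (it holds a read lock itself), so activity > 0,
		 * no writer can be active, and we never have to sleep;
		 * just record one more active reader. */
		spin_lock(&sem->wait_lock);
		sem->activity++;
		spin_unlock(&sem->wait_lock);
	}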
This patch makes no changes to the existing rw-semaphore operations, which remain fair; it simply makes mm_struct locking both fair and capable of same-process recursion.
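To show the kind of same-process recursion this permits (purely illustrative; neither function below is from the patch):

	static void do_inner_work(void)
	{
		/* Nested read lock on our own mm.  Even if a writer queued
		 * on the semaphore after the outer lock was taken, this
		 * cannot deadlock: current->mm_lockcnt > 0, so
		 * mm_lock_shared() uses down_read_recursive() and never
		 * waits behind that writer. */
		mm_lock_shared(current->mm);
		/* ... */
		mm_unlock_shared(current->mm);
	}

	void do_outer_work(void)
	{
		mm_lock_shared(current->mm);
		/* ... walk our own VMAs ... */
		do_inner_work();
		mm_unlock_shared(current->mm);
	}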
David
[patch attached: application/octet-stream]