Subject: Re: [COUNTERPATCH] mm: avoid overflowing preempt_count() in mmu_take_all_locks()
From: Peter Zijlstra <>
Date: Thu, 01 Apr 2010 13:27:44 +0200
On Thu, 2010-04-01 at 14:17 +0300, Avi Kivity wrote:
> On 04/01/2010 02:13 PM, Avi Kivity wrote:
> >
> >> Anyway, I don't see a reason why we can't convert those locks to
> >> mutexes and get rid of the whole preempt disabled region.
> >
> > If someone is willing to audit all code paths to make sure these locks
> > are always taken in schedulable context I agree that's a better fix.
> >
>
> From mm/rmap.c:
>
> > /*
> >  * Lock ordering in mm:
> >  *
> >  * inode->i_mutex       (while writing or truncating, not reading or faulting)
> >  *   inode->i_alloc_sem (vmtruncate_range)
> >  *   mm->mmap_sem
> >  *     page->flags PG_locked (lock_page)
> >  *       mapping->i_mmap_lock
> >  *         anon_vma->lock
> ...
> >  *
> >  * (code doesn't rely on that order so it could be switched around)
> >  *       ->tasklist_lock
> >  *         anon_vma->lock      (memory_failure, collect_procs_anon)
> >  *           pte map lock
> >  */
>
> i_mmap_lock is a spinlock, and tasklist_lock is a rwlock, so some
> changes will be needed.
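For context, the overflow in the subject comes from mm_take_all_locks() taking one spinlock per VMA and releasing none of them until mm_drop_all_locks(): each spin_lock() increments preempt_count(), whose PREEMPT_MASK field only admits 255 on common configs. A simplified sketch, loosely modeled on the 2.6.33-era mm/mmap.c (the real function uses vm_lock_anon_vma()/vm_lock_mapping() helpers to avoid double-locking shared objects, runs two passes, and checks for signals; all of that is elided here):

/*
 * Simplified sketch; not the real function.  Every
 * spin_lock_nest_lock() call bumps preempt_count() by one and
 * nothing drops it until mm_drop_all_locks(), so an mm with more
 * VMAs than PREEMPT_MASK allows overflows the counter.
 */
int mm_take_all_locks(struct mm_struct *mm)
{
	struct vm_area_struct *vma;

	mutex_lock(&mm_all_locks_mutex);

	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		/* one preempt_count() increment per file-backed VMA ... */
		if (vma->vm_file && vma->vm_file->f_mapping)
			spin_lock_nest_lock(
				&vma->vm_file->f_mapping->i_mmap_lock,
				&mm->mmap_sem);
		/* ... and one more per anonymous VMA */
		if (vma->anon_vma)
			spin_lock_nest_lock(&vma->anon_vma->lock,
					    &mm->mmap_sem);
	}
	return 0;
}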
i_mmap_lock will need to change as well; mm_take_all_locks() uses both anon_vma->lock and mapping->i_mmap_lock.
I've almost got a patch done that converts those two; I still need to look at where that tasklist_lock muck happens.
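A hypothetical shape for that conversion (not the actual posted patch; the 2.6.33 struct layout is assumed, and the many per-site lock changes across mm/ are elided):

/* was: spinlock_t lock; the mutex makes the critical sections sleepable */
struct anon_vma {
	struct mutex lock;
	struct list_head head;	/* list of private "related" vmas */
};

/* every locking site changes in kind, e.g.: */
static inline void anon_vma_lock(struct anon_vma *anon_vma)
{
	mutex_lock(&anon_vma->lock);	/* was: spin_lock(&anon_vma->lock); */
}

static inline void anon_vma_unlock(struct anon_vma *anon_vma)
{
	mutex_unlock(&anon_vma->lock);	/* was: spin_unlock(&anon_vma->lock); */
}

The risky part is exactly the audit Avi asks for: mutex_lock() can sleep, so any remaining caller in atomic context (for instance under the tasklist_lock rwlock in the memory_failure()/collect_procs_anon() path noted in the lock-ordering comment above) becomes a sleeping-in-atomic bug.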