Subject: Re: [COUNTERPATCH] mm: avoid overflowing preempt_count() in mmu_take_all_locks()
On Thu, 2010-04-01 at 14:17 +0300, Avi Kivity wrote:
> On 04/01/2010 02:13 PM, Avi Kivity wrote:
> >
> >> Anyway, I don't see a reason why we can't convert those locks to
> >> mutexes and get rid of the whole preempt disabled region.
> >
> > If someone is willing to audit all code paths to make sure these locks
> > are always taken in schedulable context I agree that's a better fix.
> >
>
> From mm/rmap.c:
>
> > /*
> > * Lock ordering in mm:
> > *
> > * inode->i_mutex (while writing or truncating, not reading or faulting)
> > * inode->i_alloc_sem (vmtruncate_range)
> > * mm->mmap_sem
> > * page->flags PG_locked (lock_page)
> > * mapping->i_mmap_lock
> > * anon_vma->lock
> ...
> > *
> > * (code doesn't rely on that order so it could be switched around)
> > * ->tasklist_lock
> > * anon_vma->lock (memory_failure, collect_procs_anon)
> > * pte map lock
> > */
>
> i_mmap_lock is a spinlock, and tasklist_lock is a rwlock, so some
> changes will be needed.

i_mmap_lock will need to change as well; mm_take_all_locks() uses both
anon_vma->lock and mapping->i_mmap_lock.
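
For context, the pattern in question looks roughly like this (a simplified
sketch, not the actual mm/mmap.c code, which also takes care not to lock
the same anon_vma or mapping twice):

	struct vm_area_struct *vma;

	/* mmap_sem is held for writing; walk every vma in the mm */
	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		if (vma->anon_vma)
			spin_lock(&vma->anon_vma->lock);
		if (vma->vm_file && vma->vm_file->f_mapping)
			spin_lock(&vma->vm_file->f_mapping->i_mmap_lock);
	}

	/*
	 * Each spin_lock() above increments preempt_count, so an mm with
	 * enough vmas runs the whole walk (and everything until the matching
	 * unlock pass) with preemption disabled, which is what the counter
	 * overflow in the subject is about.
	 */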

I've almost got a patch done that converts those two; I still need to look
at where that tasklist_lock muck happens.
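
Just to illustrate the direction (a hypothetical sketch with made-up
*_sketch names, not the actual patch), the conversion is essentially
spinlock_t -> struct mutex for those locks:

	#include <linux/mutex.h>
	#include <linux/list.h>

	struct anon_vma_sketch {
		struct mutex	 lock;	/* was: spinlock_t lock; */
		struct list_head head;
	};

	static inline void anon_vma_lock_sketch(struct anon_vma_sketch *anon_vma)
	{
		mutex_lock(&anon_vma->lock);	/* was: spin_lock() */
	}

	static inline void anon_vma_unlock_sketch(struct anon_vma_sketch *anon_vma)
	{
		mutex_unlock(&anon_vma->lock);	/* was: spin_unlock() */
	}

Since mutex_lock() can sleep, holding the lock no longer disables
preemption, but it also means every path that takes anon_vma->lock or
i_mmap_lock has to be in schedulable context, which is the audit Avi
mentions above.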