From: KOSAKI Motohiro <>
Subject: Re: Subject: [RFC MM] mmap_sem scaling: Use mutex and percpu counter instead
Date: Tue, 10 Nov 2009 15:21:11 +0900 (JST)
> On Fri, 6 Nov 2009, Andi Kleen wrote:
>
> > On Fri, Nov 06, 2009 at 12:08:54PM -0500, Christoph Lameter wrote:
> > > On Fri, 6 Nov 2009, Andi Kleen wrote:
> > >
> > > > Yes but all the major calls still take mmap_sem, which is not ranged.
> > >
> > > But exactly that issue is addressed by this patch!
> >
> > Major calls = mmap, brk, etc.
>
> Those are rare. More frequently are for faults, get_user_pages and
> the like operations that are frequent.
>
> brk depends on process wide settings and has to be
> serialized using a processor wide locks.
>
> mmap and other address space local modification may be able to avoid
> taking mmap write lock by taking the read lock and then locking the
> ptls in the page struct relevant to the address space being modified.
>
> This is also enabled by this patchset.
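If I understand the quoted idea correctly, the pattern is roughly the
following. This is only a hypothetical sketch against the usual page-table
walk helpers, not code from the patchset; touch_range_read_locked is a
made-up name:

/*
 * Hypothetical sketch, not code from the patchset: modify a mapping
 * while holding only the read side of mmap_sem, serializing against
 * other modifiers via the per-page-table lock (PTL).
 */
#include <linux/mm.h>

static void touch_range_read_locked(struct mm_struct *mm, unsigned long addr)
{
        pgd_t *pgd;
        pud_t *pud;
        pmd_t *pmd;
        pte_t *pte;
        spinlock_t *ptl;

        down_read(&mm->mmap_sem);       /* readers proceed in parallel */

        pgd = pgd_offset(mm, addr);
        if (pgd_none(*pgd))
                goto out;
        pud = pud_offset(pgd, addr);
        if (pud_none(*pud))
                goto out;
        pmd = pmd_offset(pud, addr);
        if (pmd_none(*pmd))
                goto out;

        /* The PTL serializes concurrent modifiers of this page-table page */
        pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
        /* ... inspect or modify *pte here ... */
        pte_unmap_unlock(pte, ptl);
out:
        up_read(&mm->mmap_sem);
}

The write lock is never taken; modifiers of the same page-table page
serialize on the PTL, while modifiers of different pages proceed in
parallel.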
Andi, why do you ignore fork()? fork() holds the mmap_sem write-side lock, and it is one of the critical paths. Ah yes, I know HPC workloads don't call fork() so frequently; I mean the typical desktop and small-server case.
I half agree with Christoph: if the issue is only in mmap(), it isn't so important.

Perhaps I have missed your point, though.
Plus, the most critical mmap_sem issue is not the locking cost itself. Under a stress workload, the process grabbing mmap_sem frequently sleeps while holding it, and the fair rw-semaphore logic then frequently prevents reader-side locking. At least, this improvement doesn't help a Google-like workload.
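To illustrate the reader-blocked-behind-writer effect, here is a small
userspace analogy. It is an assumption on my side that glibc's
writer-preferring rwlock kind approximates the kernel's fair rwsem
queueing; build with "gcc -pthread demo.c":

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t lock;

static void *reader_a(void *arg)
{
        pthread_rwlock_rdlock(&lock);
        puts("reader A: read lock held, sleeping (fault doing I/O)");
        sleep(3);               /* simulates sleeping with mmap_sem held */
        pthread_rwlock_unlock(&lock);
        return NULL;
}

static void *writer(void *arg)
{
        sleep(1);               /* arrive while reader A holds the lock */
        puts("writer: waiting for write lock (mmap/brk)");
        pthread_rwlock_wrlock(&lock);
        puts("writer: got write lock");
        pthread_rwlock_unlock(&lock);
        return NULL;
}

static void *reader_b(void *arg)
{
        sleep(2);               /* arrive after the writer queued */
        puts("reader B: trying read lock (another fault)");
        pthread_rwlock_rdlock(&lock);   /* blocks behind the queued writer */
        puts("reader B: got read lock only after the writer finished");
        pthread_rwlock_unlock(&lock);
        return NULL;
}

int main(void)
{
        pthread_rwlockattr_t attr;
        pthread_t a, w, b;

        pthread_rwlockattr_init(&attr);
        /* glibc extension: queued writers block new readers */
        pthread_rwlockattr_setkind_np(&attr,
                PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
        pthread_rwlock_init(&lock, &attr);

        pthread_create(&a, NULL, reader_a, NULL);
        pthread_create(&w, NULL, writer, NULL);
        pthread_create(&b, NULL, reader_b, NULL);
        pthread_join(a, NULL);
        pthread_join(w, NULL);
        pthread_join(b, NULL);
        return 0;
}

Reader B blocks even though the lock is only read-held, because the
writer queued first. That is the same pattern that stalls page faults
once one mmap/brk writer is waiting behind a sleeping reader.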
Thanks.
> > Only for page faults, not for anything that takes it for write.
> >
> > Anyways the better reader lock is a step in the right direction, but
> > I have my doubts it's a good idea to make write really slow here.
>
> The bigger the system the larger the problems with mmap. This is one key
> scaling issue important for the VM. We can work on that. I have a patch
> here that restricts the per cpu checks to only those cpus on which the
> process has at some times run before.
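Presumably that restriction looks something like the following. This is a
hypothetical sketch, not the actual patch; mm_reader_count and
mm_reader_sum are invented names:

/*
 * Hypothetical sketch, not the actual patch: when checking per-cpu
 * state for an mm, walk only the CPUs the process has ever run on
 * (mm_cpumask) instead of every possible CPU.
 */
#include <linux/mm_types.h>
#include <linux/cpumask.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(long, mm_reader_count);   /* invented counter */

static long mm_reader_sum(struct mm_struct *mm)
{
        long sum = 0;
        int cpu;

        /* mm_cpumask(mm) tracks the CPUs this mm has been active on */
        for_each_cpu(cpu, mm_cpumask(mm))
                sum += per_cpu(mm_reader_count, cpu);

        return sum;
}

Walking mm_cpumask(mm) bounds the cost by the parallelism the process
has actually used, rather than by the size of the machine.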