    Subject: Re: [PATCH] shmem: reduce one time of locking in pagefault
    On Wed, 07 Jul 2010 09:15:46 +0800 Shaohua Li <> wrote:

    > I'm running a shmem pagefault test case (see attached file) on a 64 CPU
    > system. Profiling shows shmem_inode_info->lock is heavily contended, with
    > CPUs spending 100% of their time trying to get the lock.

    I seem to remember complaining about that in 2002 ;) Faulting in a
    mapping of /dev/zero is just awful on a 4-way(!).

    > In the pagefault (no swap) case,
    > shmem_getpage takes the lock twice; the second acquisition is avoidable if we
    > preallocate a page, saving one round of locking. This is what the patch below does.
    > The result of the test case:
    > 2.6.35-rc3: ~20s
    > 2.6.35-rc3 + patch: ~12s
    > so this is a 40% improvement.
    > One might argue that we should have better locking for shmem. But even if shmem
    > were lockless, the pagefault path would soon see the pagecache lock heavily
    > contended, because shmem must add the new page to the pagecache. So until we have
    > better locking for the pagecache, improving shmem's locking doesn't buy much more.
    > I ran a similar pagefault test against a ramfs file; the result was ~10.5s.
    > Signed-off-by: Shaohua Li <>
    > diff --git a/mm/shmem.c b/mm/shmem.c
    > index f65f840..c5f2939 100644
    > --- a/mm/shmem.c
    > +++ b/mm/shmem.c

    The patch doesn't make shmem_getpage() any clearer :(

    shmem_inode_info.lock appears to be held too much. Surely
    lookup_swap_cache() didn't need it (for example).

    What data does shmem_inode_info.lock actually protect?
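
    For context, a minimal userspace sketch (not the kernel code, and not
    Shaohua's actual patch) of the "preallocate outside the lock" idea the
    changelog describes: the allocation happens with no lock held, the lock is
    then taken once to publish the result, and the preallocated object is
    discarded if another thread got there first. The names fault_in, slot and
    slot_lock are invented for illustration.

    #include <pthread.h>
    #include <stdlib.h>

    static void *slot;                      /* stands in for the pagecache slot */
    static pthread_mutex_t slot_lock = PTHREAD_MUTEX_INITIALIZER;

    static void *fault_in(size_t size)
    {
            void *prealloc = malloc(size);  /* allocate with no lock held */
            void *ret;

            pthread_mutex_lock(&slot_lock); /* the single locked section */
            if (!slot)
                    slot = prealloc;        /* slot was empty: install ours */
            ret = slot;
            pthread_mutex_unlock(&slot_lock);

            if (ret != prealloc)
                    free(prealloc);         /* someone else won: discard ours */
            return ret;
    }

    int main(void)
    {
            return fault_in(4096) ? 0 : 1;
    }

    The tradeoff is a possibly wasted allocation when the page already exists;
    whether that is acceptable depends on how often the fault path actually
    needs a new page.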
