Subject: [PATCH v2 5/5] mm/vmalloc: Don't spawn workers if somebody is already purging
    Don't schedule purge_vmap_work if mutex_is_locked(&vmap_purge_lock),
    as a locked mutex means that purging is already running in another
    thread. There is no point in scheduling extra purge_vmap_work if
    somebody is already purging for us: that extra work would not do
    anything useful.
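
    For context, all purging serializes on vmap_purge_lock, so a locked
    mutex means another thread is already draining vmap_purge_list. A
    simplified sketch of that arrangement (the worker shown here is
    illustrative; see mm/vmalloc.c for the real code):

    /* Illustrative sketch only -- names and details may differ from mm/vmalloc.c. */
    static void purge_vmap_work_func(struct work_struct *work)
    {
    	mutex_lock(&vmap_purge_lock);
    	__purge_vmap_area_lazy(ULONG_MAX, 0);	/* drain vmap_purge_list */
    	mutex_unlock(&vmap_purge_lock);
    }
    static DECLARE_WORK(purge_vmap_work, purge_vmap_work_func);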

    To evaluate the performance impact of this change, I used a test that
    calls fork() 100 000 times on a kernel built with CONFIG_VMAP_STACK=y
    and NR_CACHED_STACK changed to 0 (so that each fork()/exit() pair
    executes a vmalloc()/vfree() call).
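
    A minimal sketch of such a fork-loop benchmark (the actual ./fork
    source was not posted with this patch, so this is an assumed
    equivalent):

    /*
     * fork.c -- assumed equivalent of the ./fork test used below:
     * fork()/exit() 100 000 times so that, with CONFIG_VMAP_STACK=y and
     * NR_CACHED_STACK=0, each iteration triggers a vmalloc()/vfree()
     * of a kernel stack.
     */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
    	for (int i = 0; i < 100000; i++) {
    		pid_t pid = fork();

    		if (pid < 0) {
    			perror("fork");
    			return 1;
    		}
    		if (pid == 0)
    			_exit(0);		/* child exits immediately */
    		waitpid(pid, NULL, 0);	/* parent reaps the child */
    	}
    	return 0;
    }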

    Commands:
    ~ # grep try_purge /proc/kallsyms
    ffffffff811d0dd0 t try_purge_vmap_area_lazy

    ~ # perf stat --repeat 10 -ae workqueue:workqueue_queue_work \
    --filter 'function == 0xffffffff811d0dd0' ./fork

    gave me the following results:

    before:
    30 workqueue:workqueue_queue_work ( +- 1.31% )
    1.613231060 seconds time elapsed ( +- 0.38% )

    after:
    15 workqueue:workqueue_queue_work ( +- 0.88% )
    1.615368474 seconds time elapsed ( +- 0.41% )

    So there is no measurable difference in the performance of the test
    itself, but without this optimization we queue twice as many jobs.
    This should save kworkers from doing some useless work.

    Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
    Suggested-by: Thomas Hellstrom <thellstrom@vmware.com>
    Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
    ---
    mm/vmalloc.c | 3 ++-
    1 file changed, 2 insertions(+), 1 deletion(-)

    diff --git a/mm/vmalloc.c b/mm/vmalloc.c
    index ee62c0a..1079555 100644
    --- a/mm/vmalloc.c
    +++ b/mm/vmalloc.c
    @@ -737,7 +737,8 @@ static void free_vmap_area_noflush(struct vmap_area *va)
     	/* After this point, we may free va at any time */
     	llist_add(&va->purge_list, &vmap_purge_list);
     
    -	if (unlikely(nr_lazy > lazy_max_pages()))
    +	if (unlikely(nr_lazy > lazy_max_pages()) &&
    +	    !mutex_is_locked(&vmap_purge_lock))
     		schedule_work(&purge_vmap_work);
     }

    --
    2.10.2