    From: Roman Gushchin <guro@fb.com>
    Subject: [PATCH] mm: memcg/slab: optimize objcg stock draining
    Date: 5 Jan 2021

    Imran Khan reported a regression in hackbench results caused by
    commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects
    instead of pages"). The regression is noticeable when several
    relatively large slab objects, e.g. skb's, are allocated in a row.
    As soon as the amount of stocked bytes exceeds PAGE_SIZE,
    drain_obj_stock() and __memcg_kmem_uncharge() are called, which leads
    to a number of atomic operations in page_counter_uncharge().

    The corresponding call graph is below (provided by Imran Khan):
    |__alloc_skb
    | |
    | |__kmalloc_reserve.isra.61
    | | |
    | | |__kmalloc_node_track_caller
    | | | |
    | | | |slab_pre_alloc_hook.constprop.88
    | | | obj_cgroup_charge
    | | | | |
    | | | | |__memcg_kmem_charge
    | | | | | |
    | | | | | |page_counter_try_charge
    | | | | |
    | | | | |refill_obj_stock
    | | | | | |
    | | | | | |drain_obj_stock.isra.68
    | | | | | | |
    | | | | | | |__memcg_kmem_uncharge
    | | | | | | | |
    | | | | | | | |page_counter_uncharge
    | | | | | | | | |
    | | | | | | | | |page_counter_cancel
    | | | |
    | | | |
    | | | |__slab_alloc
    | | | | |
    | | | | |___slab_alloc
    | | | | |
    | | | |slab_post_alloc_hook
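
    To make the cost concrete, here is a minimal user-space model of the
    pre-patch path. It is a sketch, not kernel code: all names are
    illustrative stand-ins, and the 3-level hierarchy and 2 KiB/1000-object
    workload are made-up assumptions. Consecutive frees of skb-sized
    objects overflow the PAGE_SIZE byte stock every couple of calls, and
    each overflow performs one atomic RMW per level of the hierarchy.

    /*
     * Build: cc -std=c11 -O2 model.c && ./a.out
     * Illustrative stand-ins for refill_obj_stock() ->
     * drain_obj_stock() -> __memcg_kmem_uncharge() ->
     * page_counter_uncharge().
     */
    #include <stdatomic.h>
    #include <stdio.h>

    #define PAGE_SIZE        4096UL
    #define HIERARCHY_LEVELS 3                  /* the memcg and its ancestors */

    static atomic_long page_counter[HIERARCHY_LEVELS]; /* shared, contended */
    static unsigned long stock_bytes;           /* "per-cpu" byte stock */
    static unsigned long atomic_ops;            /* instrumentation */

    /* Stands in for drain_obj_stock() -> page_counter_uncharge():
     * one atomic sub per hierarchy level, on every drain. */
    static void drain_obj_stock_model(void)
    {
            long nr_pages = (long)(stock_bytes / PAGE_SIZE);

            stock_bytes %= PAGE_SIZE;
            for (int lvl = 0; lvl < HIERARCHY_LEVELS; lvl++) {
                    atomic_fetch_sub(&page_counter[lvl], nr_pages);
                    atomic_ops++;
            }
    }

    /* Stands in for refill_obj_stock(): accumulate bytes, drain once
     * the stocked bytes exceed PAGE_SIZE. */
    static void refill_obj_stock_model(unsigned long nr_bytes)
    {
            stock_bytes += nr_bytes;
            if (stock_bytes > PAGE_SIZE)
                    drain_obj_stock_model();
    }

    int main(void)
    {
            for (int i = 0; i < 1000; i++)      /* 1000 skb-sized objects */
                    refill_obj_stock_model(2048);

            /* ~1500 atomic ops for 1000 objects with a 3-level hierarchy */
            printf("shared-counter atomic ops: %lu\n", atomic_ops);
            return 0;
    }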

    Instead of directly uncharging the accounted kernel memory, it's
    possible to refill the generic page-sized per-cpu stock. This is
    a much faster operation, especially on the default hierarchy.
    As a bonus, __memcg_kmem_uncharge_page() gets faster too, so
    freeing page-sized kernel allocations (e.g. large kmallocs)
    also becomes faster.
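
    As a rough sketch of the effect (the same illustrative model as
    above, made self-contained; again the names and the workload are
    assumptions, not kernel code), the patched drain hands whole pages
    to a page stock modeled on refill_stock(), so the shared counters
    are touched only once per charge batch; in the kernel the batch is
    MEMCG_CHARGE_BATCH (32 pages), and stocked pages can additionally
    be consumed by later charges without any counter traffic at all.

    /* Build: cc -std=c11 -O2 model2.c && ./a.out */
    #include <stdatomic.h>
    #include <stdio.h>

    #define PAGE_SIZE        4096UL
    #define CHARGE_BATCH     32UL       /* mirrors MEMCG_CHARGE_BATCH */
    #define HIERARCHY_LEVELS 3

    static atomic_long page_counter[HIERARCHY_LEVELS];
    static unsigned long stock_bytes;   /* objcg byte stock */
    static unsigned long stock_pages;   /* generic page stock */
    static unsigned long atomic_ops;

    /* Stands in for refill_stock(): atomics are paid only once per
     * CHARGE_BATCH pages instead of on every drain. */
    static void refill_stock_model(unsigned long nr_pages)
    {
            stock_pages += nr_pages;
            if (stock_pages > CHARGE_BATCH) {
                    for (int lvl = 0; lvl < HIERARCHY_LEVELS; lvl++) {
                            atomic_fetch_sub(&page_counter[lvl],
                                             (long)stock_pages);
                            atomic_ops++;
                    }
                    stock_pages = 0;
            }
    }

    /* The patched drain: refill the page stock instead of calling
     * page_counter_uncharge() directly. */
    static void drain_obj_stock_model(void)
    {
            refill_stock_model(stock_bytes / PAGE_SIZE);
            stock_bytes %= PAGE_SIZE;
    }

    static void refill_obj_stock_model(unsigned long nr_bytes)
    {
            stock_bytes += nr_bytes;
            if (stock_bytes > PAGE_SIZE)
                    drain_obj_stock_model();
    }

    int main(void)
    {
            for (int i = 0; i < 1000; i++)  /* same workload as above */
                    refill_obj_stock_model(2048);

            /* ~45 atomic ops instead of ~1500 for the same workload */
            printf("shared-counter atomic ops: %lu\n", atomic_ops);
            return 0;
    }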

    A similar change was made earlier for socket memory by
    commit 475d0487a2ad ("mm: memcontrol: use per-cpu stocks for
    socket memory uncharging").

    Signed-off-by: Roman Gushchin <guro@fb.com>
    Reported-by: Imran Khan <imran.f.khan@oracle.com>
    ---
    mm/memcontrol.c | 4 +---
    1 file changed, 1 insertion(+), 3 deletions(-)

    diff --git a/mm/memcontrol.c b/mm/memcontrol.c
    index 0d74b80fa4de..8148c1df3aff 100644
    --- a/mm/memcontrol.c
    +++ b/mm/memcontrol.c
    @@ -3122,9 +3122,7 @@ void __memcg_kmem_uncharge(struct mem_cgroup *memcg, unsigned int nr_pages)
     	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
     		page_counter_uncharge(&memcg->kmem, nr_pages);
     
    -	page_counter_uncharge(&memcg->memory, nr_pages);
    -	if (do_memsw_account())
    -		page_counter_uncharge(&memcg->memsw, nr_pages);
    +	refill_stock(memcg, nr_pages);
     }
     
     /**
    --
    2.26.2