    Subject: Re: [PATCH RFC] mm/memcontrol: reclaim severe usage over high limit in get_user_pages loop
    On Mon 05-08-19 20:28:40, Yang Shi wrote:
    > On Mon, Aug 5, 2019 at 7:32 AM Michal Hocko <mhocko@kernel.org> wrote:
    > >
    > > On Fri 02-08-19 11:56:28, Yang Shi wrote:
    > > > On Fri, Aug 2, 2019 at 2:35 AM Michal Hocko <mhocko@kernel.org> wrote:
    > > > >
    > > > > On Thu 01-08-19 14:00:51, Yang Shi wrote:
    > > > > > On Mon, Jul 29, 2019 at 11:48 AM Michal Hocko <mhocko@kernel.org> wrote:
    > > > > > >
    > > > > > > On Mon 29-07-19 10:28:43, Yang Shi wrote:
    > > > > > > [...]
    > > > > > > > I don't worry too much about scale, since the scale issue is not
    > > > > > > > unique to background reclaim; direct reclaim may run into the same
    > > > > > > > problem.
    > > > > > >
    > > > > > > Just to clarify: by scaling problem I mean the 1:1 kswapd thread to
    > > > > > > memcg mapping. You can have thousands of memcgs, and I do not think
    > > > > > > we really want to create one kswapd for each. Once we have a kswapd
    > > > > > > thread pool, we get into tricky territory where determinism/fairness
    > > > > > > would be non-trivial to achieve. Direct reclaim, on the other hand,
    > > > > > > is bound by the workload itself.
    > > > > >
    > > > > > Yes, I agree a thread pool would introduce more latency than a
    > > > > > dedicated kswapd thread, but it does not look that bad in our tests.
    > > > > > When memory allocation is fast, even a dedicated kswapd thread cannot
    > > > > > keep up. So, such background reclaim is best effort, not guaranteed.
    > > > > >
    > > > > > I don't quite get what you mean by fairness. Do you mean the workers
    > > > > > may spend excessive CPU time and starve other processes? I think this
    > > > > > could be mitigated by organizing and configuring the cgroups properly.
    > > > > > But I agree this is tricky.
    > > > >
    > > > > No, I meant that the cost of reclaiming a unit of charges (e.g.
    > > > > SWAP_CLUSTER_MAX) is not constant and depends on the state of the memory
    > > > > on LRUs. Therefore any thread pool mechanism would lead to unfair
    > > > > reclaim and non-deterministic behavior.
    > > >
    > > > Yes, the cost depends on the state of the pages, but I still don't
    > > > quite understand what "unfair" refers to in this context. Do you mean
    > > > some cgroups may reclaim much more than others?
    > >
    > > > Or that the work may take too long, so it cannot serve other cgroups
    > > > in time?
    > >
    > > exactly.
    >
    > Actually, I'm not very concerned about this. In our design each memcg
    > has its own dedicated work item (memcg->wmark_work), so the reclaim
    > work for different memcgs can run in parallel: they are *different*
    > work items even though they run the same function. We could queue them
    > to a dedicated unbound workqueue with a maximum of 512 active work
    > items, or a limit that scales with the number of CPUs. Although the
    > system may have thousands of online memcgs, I suppose it should be
    > rare for all of them to trigger reclaim at the same time.
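
    For reference, a minimal sketch of the per-memcg work scheme described
    above: memcg->wmark_work matches the naming in the thread, while the
    wmark_high watermark field, the reclaim batch, and the workqueue
    parameters are assumptions, not the actual patch.

    /* Sketch only: per-memcg background reclaim via a shared unbound
     * workqueue. The work item is assumed to be INIT_WORK()-ed at memcg
     * creation; wmark_work and wmark_high are out-of-tree fields. */
    #include <linux/workqueue.h>
    #include <linux/memcontrol.h>
    #include <linux/swap.h>

    static struct workqueue_struct *memcg_wmark_wq;

    static void memcg_wmark_workfn(struct work_struct *work)
    {
            struct mem_cgroup *memcg = container_of(work, struct mem_cgroup,
                                                    wmark_work);

            /* Reclaim in SWAP_CLUSTER_MAX batches until usage falls back
             * under the high watermark or reclaim stops making progress. */
            while (page_counter_read(&memcg->memory) > memcg->wmark_high)
                    if (!try_to_free_mem_cgroup_pages(memcg, SWAP_CLUSTER_MAX,
                                                      GFP_KERNEL, true))
                            break;
    }

    static int __init memcg_wmark_init(void)
    {
            /* WQ_UNBOUND: work items for different memcgs run in parallel,
             * up to max_active (512 here, the core's WQ_MAX_ACTIVE cap). */
            memcg_wmark_wq = alloc_workqueue("memcg_wmark",
                                             WQ_UNBOUND | WQ_MEM_RECLAIM, 512);
            return memcg_wmark_wq ? 0 : -ENOMEM;
    }

    /* Charge path, once usage crosses the watermark: queueing is a no-op
     * if the memcg's work item is already pending. */
    static void memcg_wmark_kick(struct mem_cgroup *memcg)
    {
            queue_work(memcg_wmark_wq, &memcg->wmark_work);
    }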

    I do believe that it might work for your particular use case, but I am
    afraid it is not robust enough for the upstream kernel.

    As I've said, I am open to discussing an opt-in, per-memcg pro-active
    reclaim (a kernel thread that belongs to the memcg), but it has to be a
    dedicated worker bound by all of the cgroup's resource restrictions.
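
    A similarly hedged sketch of this alternative: one dedicated reclaim
    kthread per opted-in memcg. Every field name below (wmark_waitq,
    wmark_high, kswapd) is hypothetical, and the key step of binding the
    thread to the memcg's own cgroup restrictions is deliberately elided.

    /* Sketch only: a per-memcg kswapd-style kernel thread. */
    #include <linux/kthread.h>
    #include <linux/wait.h>
    #include <linux/memcontrol.h>
    #include <linux/swap.h>

    static int memcg_kswapd_fn(void *data)
    {
            struct mem_cgroup *memcg = data;

            while (!kthread_should_stop()) {
                    /* Sleep until the charge path signals that usage has
                     * crossed the watermark (wmark_waitq is assumed). */
                    wait_event_interruptible(memcg->wmark_waitq,
                            page_counter_read(&memcg->memory) >
                                    memcg->wmark_high ||
                            kthread_should_stop());

                    while (page_counter_read(&memcg->memory) > memcg->wmark_high)
                            if (!try_to_free_mem_cgroup_pages(memcg,
                                            SWAP_CLUSTER_MAX, GFP_KERNEL, true))
                                    break;
            }
            return 0;
    }

    /* Called when the opt-in knob is enabled. The hard part, elided here,
     * is attaching the task to the memcg's cgroup so that its CPU and
     * memory cost is accounted to the cgroup itself. */
    static int memcg_kswapd_start(struct mem_cgroup *memcg)
    {
            memcg->kswapd = kthread_run(memcg_kswapd_fn, memcg, "memcg_kswapd");
            return PTR_ERR_OR_ZERO(memcg->kswapd);
    }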

    --
    Michal Hocko
    SUSE Labs
