From: Michal Hocko
Date: Wed, 1 Nov 2017
Subject: Re: [PATCH] mm,oom: Try last second allocation before and after selecting an OOM victim.
On Wed 01-11-17 20:58:50, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Tue 31-10-17 22:51:49, Tetsuo Handa wrote:
> > > Michal Hocko wrote:
> > > > On Tue 31-10-17 22:13:05, Tetsuo Handa wrote:
> > > > > Michal Hocko wrote:
> > > > > > On Tue 31-10-17 21:42:23, Tetsuo Handa wrote:
> > > > > > > > While both have some merit, the first reason is mostly historical
> > > > > > > > because we have the explicit locking now and it is really unlikely that
> > > > > > > > the memory would be available right after we have given up trying.
> > > > > > > > Last attempt allocation makes some sense of course but considering that
> > > > > > > > the oom victim selection is quite an expensive operation which can take
> > > > > > > > a considerable amount of time it makes much more sense to retry the
> > > > > > > > allocation after the most expensive part rather than before. Therefore
> > > > > > > > move the last attempt right before we are trying to kill an oom victim
> > > > > > > > to rule out potential races when somebody could have freed a lot of memory
> > > > > > > > in the meantime. This will reduce the time window for potentially
> > > > > > > > pre-mature OOM killing considerably.
> > > > > > >
> > > > > > > But this is about "doing last second allocation attempt after selecting
> > > > > > > an OOM victim". This is not about "allowing OOM victims to try ALLOC_OOM
> > > > > > > before selecting next OOM victim" which is the actual problem I'm trying
> > > > > > > to deal with.
> > > > > >
> > > > > > then split it into two. First make the general case and then add a more
> > > > > > sophisticated one on top. Dealing with multiple issues at once is what makes
> > > > > > all those brain cells suffer.
> > > > >
> > > > > I'm failing to understand. I was dealing with a single issue at a time.
> > > > > The single issue is "MMF_OOM_SKIP prematurely prevents OOM victims from trying
> > > > > ALLOC_OOM before selecting next OOM victims". Then, what are the general case and
> > > > > the more sophisticated one? I wonder what other than "MMF_OOM_SKIP should allow OOM
> > > > > victims to try ALLOC_OOM once before selecting next OOM victims" can exist...
> > > >
> > > > Try to think a little bit outside of your very specific and borderline usecase
> > > > and it will become obvious. ALLOC_OOM is a trivial update on top of
> > > > moving get_page_from_freelist to oom_kill_process, which is a more
> > > > generic race window reducer.
> > >
> > > So, you meant "doing last second allocation attempt after selecting an OOM victim"
> > > as the general case and "using ALLOC_OOM at the last second allocation attempt" as
> > > the more sophisticated one. Then, you won't object to conditionally switching
> > > ALLOC_WMARK_HIGH and ALLOC_OOM for the last second allocation attempt, will you?
> >
> > yes for oom_victims
>
> OK.
>
> >
> > > But doing ALLOC_OOM for the last second allocation attempt from out_of_memory()
> > > involves duplicating code (e.g. rebuilding the zone list).
> >
> > Why would you do it? Do not blindly copy and paste code without
> > a good reason. What kind of problem does this actually solve?
>
> prepare_alloc_pages()/finalise_ac() initializes as
>
>     ac->high_zoneidx = gfp_zone(gfp_mask);
>     ac->zonelist = node_zonelist(preferred_nid, gfp_mask);
>     ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
>                     ac->high_zoneidx, ac->nodemask);
>
> and selecting as an OOM victim reinitializes as
>
>     ac->zonelist = node_zonelist(numa_node_id(), gfp_mask);
>     ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
>                     ac->high_zoneidx, ac->nodemask);
>
> and I assume that this reinitialization might affect which memory reserve
> the OOM victim allocates from.
>
> You mean such difference is too trivial to care about?

You keep repeating what the _current_ code does without explaining _why_
we need the same thing in the oom path. Could you finally answer my
question please?
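
Pieced together from the fragments quoted above, the comparison being made
is roughly the following (variable names follow the quoted code; the
surrounding functions are abridged):

    /* Normal allocation path, as set up by
     * prepare_alloc_pages()/finalise_ac(): the zonelist is keyed to the
     * caller-supplied preferred node. */
    ac->high_zoneidx = gfp_zone(gfp_mask);
    ac->zonelist = node_zonelist(preferred_nid, gfp_mask);
    ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
                    ac->high_zoneidx, ac->nodemask);

    /* Reinitialization after the task is selected as an OOM victim: the
     * zonelist is rebuilt against whatever node the task is currently
     * running on, which may differ from preferred_nid and can therefore
     * change the zone iteration order, i.e. which memory reserve the
     * victim ends up allocating from. */
    ac->zonelist = node_zonelist(numa_node_id(), gfp_mask);
    ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
                    ac->high_zoneidx, ac->nodemask);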

> > > What is your preferred approach?
> > > Duplicate relevant code? Use get_page_from_freelist() without rebuilding the zone list?
> > > Use __alloc_pages_nodemask() ?
> >
> > Just do what we do now with ALLOC_WMARK_HIGH and in a separate patch use
> > ALLOC_OOM for oom victims. There shouldn't be any reasons to play
> > additional tricks here.
> >
>
> Posted as http://lkml.kernel.org/r/1509537268-4726-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp .
>
> But I'm still unable to understand why "moving get_page_from_freelist to
> oom_kill_process" is better than "copying get_page_from_freelist to
> oom_kill_process", because "moving" increases the possibility of allocation
> failures when out_of_memory() is not called.

The changelog I have provided to you should answer that. It is highly
unlikely that the high wmark would succeed _right_ after we have just
given up. If this assumption is not correct then we can _add_ such a call
based on real data rather than add more bloat "just because we used to
do that". As I've said, I completely hate cargo-cult programming. Do
not add more.
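
As a rough sketch, what is being suggested could look like the following
(the helper name alloc_pages_before_oomkill() and the oc->ac plumbing are
hypothetical, for illustration only; oom_reserves_allowed() and the
ALLOC_* flags are per mm/page_alloc.c):

    /*
     * Hypothetical last second attempt performed right before killing
     * an OOM victim.  First patch: keep the existing ALLOC_WMARK_HIGH
     * semantics.  Follow-up patch: let an OOM victim dip into the OOM
     * memory reserves instead.
     */
    static struct page *alloc_pages_before_oomkill(struct oom_control *oc)
    {
        /* The high watermark only catches a parallel oom killing. */
        unsigned int alloc_flags = ALLOC_CPUSET | ALLOC_WMARK_HIGH;
        /* No direct reclaim here; we have already given up on reclaim. */
        gfp_t gfp_mask = (oc->gfp_mask | __GFP_HARDWALL) &
                         ~__GFP_DIRECT_RECLAIM;

        /* An OOM victim may use the OOM memory reserves instead. */
        if (oom_reserves_allowed(current))
            alloc_flags = ALLOC_OOM;

        return get_page_from_freelist(gfp_mask, oc->order,
                                      alloc_flags, oc->ac);
    }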

> Also, I'm still unable to understand why we should use ALLOC_WMARK_HIGH.
> I think that using the regular watermark for the last second allocation
> attempt is better (as described below).

If you believe that a standard wmark is sufficient then make it a
separate patch with the full explanation why.

> __alloc_pages_may_oom() is doing a last second allocation attempt using
> ALLOC_WMARK_HIGH before calling out_of_memory(). This has two motivations.
> The first one is explained by the comment that it aims to catch potential
> parallel OOM killing and the second one was explained by Andrea Arcangeli
> as follows:
> : Elaborating the comment: the reason for the high wmark is to reduce
> : the likelihood of livelocks and be sure to invoke the OOM killer, if
> : we're still under pressure and reclaim just failed. The high wmark is
> : used to be sure the failure of reclaim isn't going to be ignored. If
> : using the min wmark like you propose there's risk of livelock or
> : anyway of delayed OOM killer invocation.
>
> But neither motivation applies to the current code. Regarding the former,
> there is no parallel OOM killing (in the sense that out_of_memory() is
> called "concurrently") because we serialize out_of_memory() calls using
> oom_lock. Regarding the latter, there is no possibility of OOM livelocks,
> nor of failing to invoke the OOM killer, because we mask
> __GFP_DIRECT_RECLAIM for the last second allocation attempt, and because
> oom_lock prevents the __GFP_DIRECT_RECLAIM && !__GFP_NORETRY allocations
> (which the last second allocation attempt depends on) from failing.

Read that comment again. I believe you have misunderstood it. It is not
about gfp flags at all. It is that we simply never invoke the OOM killer
just because of small allocation fluctuations.
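
For reference, the existing last second attempt and the serialization
being referred to live in __alloc_pages_may_oom(); abridged (the
oom_control setup and error handling omitted), it does roughly:

    static inline struct page *
    __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
                          const struct alloc_context *ac,
                          unsigned long *did_some_progress)
    {
        struct page *page;

        *did_some_progress = 0;

        /* All out_of_memory() calls are serialized behind oom_lock. */
        if (!mutex_trylock(&oom_lock)) {
            *did_some_progress = 1;
            schedule_timeout_uninterruptible(1);
            return NULL;
        }

        /*
         * Go through the zonelist yet one more time, keeping a very
         * high watermark: this is only meant to catch a parallel oom
         * killing, not to fire the OOM killer over a small fluctuation
         * around the min watermark.
         */
        page = get_page_from_freelist((gfp_mask | __GFP_HARDWALL) &
                                      ~__GFP_DIRECT_RECLAIM, order,
                                      ALLOC_WMARK_HIGH|ALLOC_CPUSET, ac);
        if (page)
            goto out;

        /* ... decide whether to invoke out_of_memory() ... */
    out:
        mutex_unlock(&oom_lock);
        return page;
    }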

[...]
> Thread1: Enters __alloc_pages_may_oom().
> Thread2: Enters __alloc_pages_may_oom().
> Thread3: Enters __alloc_pages_may_oom().
> Thread1: Preempted by somebody else.
> Thread2: Preempted by somebody else.
> Thread3: mutex_trylock(&oom_lock) succeeds.
> Thread3: get_page_from_freelist(ALLOC_WMARK_HIGH) fails. And
>          get_page_from_freelist(ALLOC_WMARK_MIN) would have failed.
> Thread3: Calls out_of_memory() and kills a not-such-memhog victim.
> Thread3: Calls mutex_unlock(&oom_lock).
> Thread1: Returns from preemption.
> Thread1: mutex_trylock(&oom_lock) succeeds.
> Thread1: get_page_from_freelist(ALLOC_WMARK_HIGH) fails. But
>          get_page_from_freelist(ALLOC_WMARK_MIN) would have succeeded.
> Thread1: Calls out_of_memory() and kills the next not-such-memhog victim.
> Thread1: Calls mutex_unlock(&oom_lock).
> Thread2: Returns from preemption.
> Thread2: mutex_trylock(&oom_lock) succeeds.
> Thread2: get_page_from_freelist(ALLOC_WMARK_HIGH) fails. But
>          get_page_from_freelist(ALLOC_WMARK_MIN) would have succeeded.
> Thread2: Calls out_of_memory() and kills the next not-such-memhog victim.
> Thread2: Calls mutex_unlock(&oom_lock).
>
> and Thread1/Thread2 would not have needed to OOM-kill if ALLOC_WMARK_MIN had
> been used. When we hit a sequence like the above, using ALLOC_WMARK_HIGH for
> the last second allocation attempt is unlikely to help avoid potential
> parallel OOM killing; rather, using ALLOC_WMARK_MIN is what would likely
> help avoid it.

I am not sure such a scenario matters all that much because it assumes
that the oom victim doesn't really free much memory [1] (basically less
than HIGH-MIN). Most OOM situations simply have a memory hog consuming a
significant amount of memory. Sure, you can construct a workload which
spans many zones (especially on NUMA systems with many nodes/zones), but
can we focus on reasonable workloads rather than overcomplicate things
without a good reason just because "we can screw up systems in so many
different ways"? In other words, please be reasonable...

[1] Take this as an example:

Node 0, zone   Normal
  pages free     78348
        min      11522
        low      14402
        high     17282
        spanned  1368064
        present  1368064
        managed  1332983

which is a 5GB zone where high-min is ~20MB.
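
(Checking the arithmetic, assuming 4 KiB pages: high - min = 17282 - 11522
= 5760 pages ~= 22.5 MiB, i.e. the ~20MB quoted above, and managed =
1332983 pages ~= 5.1 GiB, i.e. the 5GB zone.)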
--
Michal Hocko
SUSE Labs
