Subject: Re: [PATCH 0/3] OOM detection rework v4
On Fri, Mar 04, 2016 at 04:15:58PM +0100, Michal Hocko wrote:
> On Fri 04-03-16 14:23:27, Joonsoo Kim wrote:
> > On Thu, Mar 03, 2016 at 04:25:15PM +0100, Michal Hocko wrote:
> > > On Thu 03-03-16 23:10:09, Joonsoo Kim wrote:
> > > > 2016-03-03 18:26 GMT+09:00 Michal Hocko <mhocko@kernel.org>:
> [...]
> > > > >> I guess that the usual case of high-order allocation failure still has enough free pages.
> > > > >
> > > > > Not sure I understand what you mean here, but I wouldn't be surprised
> > > > > if a high-order allocation failed even with enough free pages. And that
> > > > > is exactly why I am claiming that reclaiming more pages is no free
> > > > > ticket to high-order pages.
> > > >
> > > > I didn't say that it's a free ticket. OOM kill would be the most
> > > > expensive ticket we have. Why do you want to kill something?
> > >
> > > Because all the attempts so far have failed and we should rather not
> > > retry endlessly. With the band-aid we know we will retry
> > > MAX_RECLAIM_RETRIES at most. So compaction had that many attempts to
> > > resolve the situation, along with the same number of reclaim rounds to
> > > help and get over the watermarks.
> > >
> > > > It also doesn't guarantee to produce high-order pages. It is just
> > > > another way of reclaiming memory. What is the difference between
> > > > plain reclaim and OOM kill? Why do we use OOM kill in this case?
> > >
> > > What is our alternative other than looping endlessly?
> >
> > Loop as long as free memory or the estimated available memory (free +
> > reclaimable) increases. That would mean we made some progress. And they
> > cannot grow forever, because both reclaimable memory and total memory
> > are finite. You can reset no_progress_loops = 0 whenever that metric
> > increases beyond its previous best.
>
> Hmm, why is this any better than taking the feedback from the reclaim
> (did_some_progress)?

My suggestion would only apply to the high-order case. In that case,
free pages and reclaimable pages are already sufficient, and parallel
consumers of free pages would regenerate reclaimable pages endlessly,
so a positive did_some_progress would be returned endlessly. We need
to stop retrying at some point, so we need a metric that guarantees a
finite number of retries in any case.
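
Something like the following userspace sketch is what I have in mind.
The helpers here are made-up stand-ins for the real kernel interfaces,
so take it as an illustration of the bound, not a patch:

#include <stdbool.h>

#define MAX_RECLAIM_RETRIES 16

/* stand-in: pages currently free */
static unsigned long zone_free_pages(void) { return 0; }
/* stand-in: pages reclaim could still get back */
static unsigned long zone_reclaimable_pages(void) { return 0; }
/* stand-in: one reclaim + compaction round, true on success */
static bool reclaim_and_compact_once(unsigned int order)
{
	(void)order;
	return false;
}

static bool retry_highorder_alloc(unsigned int order)
{
	unsigned long best_available = 0;
	int no_progress_loops = 0;

	while (no_progress_loops < MAX_RECLAIM_RETRIES) {
		unsigned long available;

		if (reclaim_and_compact_once(order))
			return true;	/* allocation succeeded */

		available = zone_free_pages() + zone_reclaimable_pages();
		if (available > best_available) {
			/*
			 * The estimate grew, so count it as progress and
			 * keep trying. It cannot grow past total memory,
			 * so the number of retries stays finite.
			 */
			best_available = available;
			no_progress_loops = 0;
		} else {
			no_progress_loops++;
		}
	}

	return false;	/* no growth for 16 rounds: fall back to OOM */
}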

>
> > With this bound, we can do our best to try to solve this unpleasant
> > situation before OOM.
> >
> > Looping an unconditional 16 times and then OOM killing really doesn't
> > make any sense, because it doesn't mean that we have done our best.
>
> 16 is not really that important. We can change that if it doesn't
> sound sufficient. But please note that each reclaim round means
> that we have scanned all eligible LRUs to find and reclaim something
> and asked direct compaction to prepare a high-order page.
> This sounds like "do our best" to me.

AFAIK, each reclaim round doesn't reclaim every reclaimable page; it
has a per-round reclaim target. That doesn't look like our best to me,
and N retries only multiply that target by N (rough numbers below),
which still isn't our best and will lead to premature OOM kills.
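
To put rough numbers on it: as I read the code, direct reclaim aims
for about SWAP_CLUSTER_MAX pages per round, so the retry cap bounds
the total rather tightly (the 4K page size below is an assumption):

#include <stdio.h>

#define SWAP_CLUSTER_MAX    32UL	/* per-round direct reclaim target */
#define MAX_RECLAIM_RETRIES 16UL	/* retry cap before OOM */

int main(void)
{
	unsigned long pages = SWAP_CLUSTER_MAX * MAX_RECLAIM_RETRIES;

	/* ~512 pages, i.e. ~2MB with 4K pages, before we declare OOM */
	printf("at most %lu pages (~%lu KB) reclaimed before OOM\n",
	       pages, pages * 4);
	return 0;
}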

> Now it seems that we need more changes, at least in the compaction area,
> because the code doesn't seem to fit the nature of !costly allocation
> requests. I am also not satisfied with the fixed MAX_RECLAIM_RETRIES for
> high-order pages; I would much rather see some feedback mechanism which
> could be measured and evaluated in some way, but is this really necessary
> for the initial version?

I don't know. My analysis is based only on guesswork and background
knowledge, not on a practical use case, so I'm not sure whether it is
necessary for the initial version. It's up to you.

Thanks.
