Subject: Re: [PATCH 0/3] OOM detection rework v4
On Thu, Mar 03, 2016 at 04:50:16PM +0100, Vlastimil Babka wrote:
> On 03/03/2016 03:10 PM, Joonsoo Kim wrote:
> >
> >> [...]
> >>>>> At least, reset no_progress_loops when did_some_progress is set.
> >>>>> High-order allocations up to PAGE_ALLOC_COSTLY_ORDER are as
> >>>>> important as order-0, and reclaiming something would increase the
> >>>>> probability of compaction success.
> >>>>
> >>>> This is something I still do not understand. Why would reclaiming
> >>>> random order-0 pages help compaction? Could you clarify this please?
> >>>
> >>> I can only give the short version here; please check the link in my
> >>> other reply. Compaction can scan a wider range of memory when there
> >>> are more free pages. This is due to a limitation of the algorithm,
> >>> so reclaiming random order-0 pages does help compaction.
> >>
> >> I will have a look at that code, but this just doesn't make any
> >> sense. Compaction should be reshuffling pages; this shouldn't be a
> >> function of free memory.
> >
> > Please refer to the link I mentioned before. There is a reason why
> > more free memory helps compaction succeed. Compaction doesn't work
> > like random reshuffling; it follows an algorithm meant to reduce
> > overall system fragmentation, so there is a limitation.
>
> I proposed another way to get better results from direct compaction -
> don't scan for free pages but get them directly from freelists:
>
> https://lkml.org/lkml/2015/12/3/60
>

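For reference, the no_progress_loops reset I suggested above would look
roughly like this in the allocator's retry path (only a sketch against
this series, borrowing its MAX_RECLAIM_RETRIES; the oom label is
invented):

	/*
	 * Sketch: reset the stall counter whenever reclaim makes
	 * progress, for any order up to PAGE_ALLOC_COSTLY_ORDER,
	 * instead of treating high-order requests as hopeless.
	 */
	if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
		no_progress_loops = 0;
	else
		no_progress_loops++;

	if (no_progress_loops > MAX_RECLAIM_RETRIES)
		goto oom;
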
I think the major problem with this freelist approach is that there is
no way to prevent other threads compacting in parallel from taking free
pages in the targeted aligned block. So, if there are parallel
compaction requestors, they would disturb each other. However, this
would not be a problem for orders up to PAGE_ALLOC_COSTLY_ORDER, which
finish quickly.
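
To make the race concrete, here is a very rough sketch of grabbing
migration targets straight from the buddy freelists (the helper name is
made up; this is not the actual patch from the link above):

static struct page *capture_free_page(struct zone *zone)
{
	struct page *page;
	unsigned long flags;

	spin_lock_irqsave(&zone->lock, flags);
	/* pull an order-0 page straight off the buddy free lists */
	page = __rmqueue(zone, 0, MIGRATE_MOVABLE);
	spin_unlock_irqrestore(&zone->lock, flags);

	/*
	 * Nothing ties this page to the aligned block one compactor is
	 * assembling, so a parallel compactor can be handed pages from
	 * the very block we are trying to free up.
	 */
	return page;
}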

In fact, for quick allocation the migration scanner is also
unnecessary. There would be a lot of pageblocks where we cannot do
migration, and scanning all of them in this situation is needless and
costly. Moreover, scanning only half of the zone, due to a limitation
of the compaction algorithm, doesn't look good either. Instead, we
could take a base page from the LRU list and migrate its neighbouring
pages, as in the sketch below. I named this idea "lumpy compaction"
but haven't tried it. If we only focus on quick allocation, this would
be a better way. Any thoughts?
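
A minimal sketch of what I mean, with invented helpers
(isolate_lru_base_page() and migrate_page_range() do not exist in the
kernel; this is an idea, not an implementation):

static int lumpy_compact(struct zone *zone, unsigned int order)
{
	/* invented helper: grab one movable page off the zone's LRU */
	struct page *base = isolate_lru_base_page(zone);
	unsigned long start_pfn;

	if (!base)
		return -EAGAIN;

	/* target the order-aligned block containing the base page */
	start_pfn = round_down(page_to_pfn(base), 1UL << order);

	/*
	 * Migrate every in-use page out of that block; no zone-wide
	 * migration scanner, no free scanner.  Invented helper again.
	 */
	return migrate_page_range(start_pfn, start_pfn + (1UL << order));
}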

Thanks.
