Subject: Re: [PATCH -mm] vmscan: bail out of page reclaim after swap_cluster_max pages
On Sun, 16 Nov 2008 16:38:56 +0900 (JST)
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> wrote:

> One more point.
>
> > Sometimes the VM spends the first few priority rounds rotating back
> > referenced pages and submitting IO. Once we get to a lower priority,
> > sometimes the VM ends up freeing way too many pages.
> >
> > The fix is relatively simple: in shrink_zone() we can check how many
> > pages we have already freed and break out of the loop.
> >
> > However, in order to do this we do need to know how many pages we already
> > freed, so move nr_reclaimed into scan_control.
>
> IIRC, in the past reclaim throttling discussion, Balbir-san explained
> that the implementation of the memcgroup force-cache-dropping feature
> needs reclaim to not bail out.
>
> I am not sure whether this is still right (IIRC, the memcgroup
> implementation has been largely changed since then).
>
> Balbir-san, could you comment on this patch?
>
>
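To recap the quoted idea in code: a minimal sketch of the bail-out
check, assuming simplified names and hypothetical helpers (the real
patch works against shrink_zone() in mm/vmscan.c and moves nr_reclaimed
into struct scan_control; it is not reproduced here):

struct zone;	/* opaque for this sketch */

struct scan_control {
	unsigned long swap_cluster_max;	/* per-pass reclaim target */
	unsigned long nr_reclaimed;	/* running total, moved in from
					 * local variables */
	/* ... */
};

/* both helpers are hypothetical stand-ins for the list-shrinking loop */
unsigned long zone_scan_target(struct zone *zone, int priority);
unsigned long shrink_list_once(struct zone *zone, struct scan_control *sc);

static void shrink_zone(int priority, struct zone *zone,
			struct scan_control *sc)
{
	unsigned long nr_to_scan = zone_scan_target(zone, priority);

	while (nr_to_scan--) {
		sc->nr_reclaimed += shrink_list_once(zone, sc);

		/*
		 * Bail out once enough pages have been freed, so a
		 * low-priority pass cannot free way too many pages.
		 */
		if (sc->nr_reclaimed > sc->swap_cluster_max)
			break;
	}
}

Because nr_reclaimed lives in scan_control rather than in a local
variable, the running total is visible across the whole reclaim pass,
which is what makes the early break possible.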
I'm not Balbir-san, but there is no "force-cache-dropping" feature now.
(I have no plan to add one.)

But the mem+swap controller will need to modify the reclaim path to do
"cache drop first", because the amount of "mem+swap" will not change when
usage hits the "mem+swap" limit: swapping a page out only moves the charge
from memory to swap. For now, it sets "sc.may_swap" to 0.
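In code, that looks roughly like this (a sketch with a hypothetical
predicate, not the actual memcontrol.c logic):

struct scan_control {
	int may_swap;	/* 0: do not swap out anonymous pages */
	/* ... */
};

int mem_swap_limit_hit(void);	/* hypothetical predicate */

/*
 * Sketch only: when the mem+swap counter is at its limit, swapping a
 * page out moves the charge from memory to swap without lowering
 * mem+swap usage, so only dropping page cache can make progress.
 */
void setup_memcg_reclaim(struct scan_control *sc)
{
	sc->may_swap = 1;		/* default: swapping is allowed */
	if (mem_swap_limit_hit())
		sc->may_swap = 0;	/* keep anon; drop cache first */
}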

Hmm, I hope memcg will be a silver bullet for this kind of (special?)
workload in the long term.


Thanks,
-Kame