Subject: Re: [PATCH] RFC: vmscan: add min_filelist_kbytes sysctl for protecting the working set
From: Minchan Kim

On Wed, Nov 3, 2010 at 11:00 AM, Rik van Riel <riel@redhat.com> wrote:
> On 11/02/2010 08:48 PM, Minchan Kim wrote:
>
>>> I wonder if a possible solution would be to limit how fast
>>> file pages get reclaimed, when the page cache is very small.
>>> Say, inactive_file * active_file < 2 * zone->pages_high ?
>>
>> Why do you multiply inactive_file and active_file?
>> What does that mean?
>
> That was a stupid typo, it should have been a + :)
>
>> I think it's very difficult to fix _a_ threshold.
>> At least, the user has to set it to a proper value to use the feature.
>> Anyway, we need a default value, and that needs some experiments on
>> desktop and embedded systems.
>
> Yes, setting a threshold will be difficult.  However,
> if the behaviour below that threshold is harmless to
> pretty much any workload, it doesn't matter a whole
> lot where we set it...

Okay. But I doubt we can come up with a default value that is actually
effective when we really need the feature.
Whenever a user enables it, he will probably have to tweak the knob himself.
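
By the way, just to check that I read your condition right, here is a toy
user-space model of the check (the struct, its fields, and the numbers are
illustrative only; this is not the real vmscan code):

/* Toy model of the "page cache is very low" test quoted above,
 * using Rik's corrected '+' instead of the '*' typo.
 */
#include <stdbool.h>
#include <stdio.h>

struct zone_model {
	unsigned long nr_inactive_file;	/* pages on the inactive file LRU */
	unsigned long nr_active_file;	/* pages on the active file LRU */
	unsigned long pages_high;	/* zone high watermark, in pages */
};

static bool file_cache_is_low(const struct zone_model *z)
{
	return z->nr_inactive_file + z->nr_active_file < 2 * z->pages_high;
}

int main(void)
{
	struct zone_model z = { .nr_inactive_file = 300,
				.nr_active_file  = 200,
				.pages_high      = 512 };

	printf("file cache low: %s\n", file_cache_is_low(&z) ? "yes" : "no");
	return 0;
}

So the special handling would kick in only while the whole file LRU is
smaller than twice the high watermark.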

>
>>> At that point, maybe we could slow down the reclaiming of
>>> page cache pages to be significantly slower than they can
>>> be refilled by the disk.  Maybe 100 pages a second - that
>>> can be refilled even by an actual spinning metal disk
>>> without even the use of readahead.
>>>
>>> That can be rounded up to one batch of SWAP_CLUSTER_MAX
>>> file pages every 1/4 second, when the number of page cache
>>> pages is very low.
>>
>> How about reducing the scanning window size?
>> I think it could approximate the idea.
>
> A good idea in principle, but if it results in the VM
> simply calling the pageout code more often, I suspect
> it will not have any effect.
>
> Your patch looks like it would have that effect.


It could.
But a time-based approach would have the same problem, IMHO.
First of all, I don't want to add long latency to the direct reclaim path,
because it directly affects the responsiveness of foreground processes.

If the VM limits the number of pages reclaimed per second, direct reclaim
latency will suffer, so we should avoid throttling in the direct reclaim
path. Agree?

And if we only slow down page reclaim in kswapd, more processes will end up
entering direct reclaim, so it again results in the VM simply calling the
pageout code more often.

If I misunderstood how you intend to implement your idea, please let me know.
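
To show what worries me a bit more concretely, here is a toy user-space
sketch of that kind of rate limit (one SWAP_CLUSTER_MAX batch per 1/4
second) applied only on the kswapd side. Everything except the value of
SWAP_CLUSTER_MAX is made up for illustration:

/* Toy model of time-based throttling of file page reclaim that is
 * applied only when reclaim runs from kswapd.  Direct reclaimers skip
 * the sleep, so their latency is unchanged, but they also get no
 * protection and keep hitting the pageout path themselves.
 */
#include <stdbool.h>
#include <unistd.h>

#define SWAP_CLUSTER_MAX	32		/* as in the kernel */
#define THROTTLE_INTERVAL_US	250000		/* one batch per 1/4 second */

static unsigned long shrink_file_batch(void)
{
	/* Stand-in for isolating and reclaiming one batch of file pages. */
	return SWAP_CLUSTER_MAX;
}

static unsigned long reclaim_file_pages(unsigned long want,
					bool from_kswapd,
					bool file_cache_low)
{
	unsigned long done = 0;

	while (done < want) {
		done += shrink_file_batch();
		if (from_kswapd && file_cache_low)
			usleep(THROTTLE_INTERVAL_US);	/* slow down kswapd */
	}
	return done;
}

int main(void)
{
	reclaim_file_pages(64, true, true);	/* kswapd: throttled batches */
	reclaim_file_pages(64, false, true);	/* direct reclaim: no throttle */
	return 0;
}

If kswapd is the only one being slowed down, the pressure just moves to
the direct reclaimers, which is the behaviour I would like to avoid.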

>
> I suspect we will need a time-based approach to really
> protect the last bits of page cache in a near-OOM
> situation.
>
>>> Would there be any downsides to this approach?
>>
>> My first concern was unbalanced aging of anon vs. file pages.
>> But I think it's not a problem. It is the result the user wants: the user
>> wants to protect file-backed pages (e.g., code pages), so heavy anon
>> swapout is the natural consequence of keeping the system going. If the
>> system has no swap, we have no choice except OOM.
>
> We already have an unbalance in aging anon and file
> pages, several of which are introduced on purpose.
>
> In this proposal, there would only be an imbalance
> if the number of file pages is really low.

Right.

>
> --
> All rights reversed
>



--
Kind regards,
Minchan Kim
