Subject: Re: [PATCH 0/5] blkcg: Limit maximum number of aio requests available for cgroup
On 06.12.2017 20:44, Benjamin LaHaise wrote:
> On Wed, Dec 06, 2017 at 06:32:56PM +0100, Oleg Nesterov wrote:
>>>> This memory lives in the page cache / LRU; it is visible to the
>>>> shrinker, which will unmap these pages for no reason on memory
>>>> shortage. IOW, aio fools the kernel: this memory looks reclaimable
>>>> but it is not. And we only do this for migration.
>>>
>>> It's the same as any other memory that's mlock()ed into RAM.
>>
>> No. Again, this memory is not properly accounted, and unlike mlock()ed
>> memory it is visible to the shrinker, which will do unnecessary work on
>> memory shortage and in turn cause unnecessary page faults.
>>
>> So let me repeat, shouldn't we at least do mapping_set_unevictable() in
>> aio_private_file() ?
>
> Send a patch then! I don't know why you're asking rather than sending a
> patch to do this if you think it is needed.
>
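For reference, the change being discussed is roughly a one-liner in
aio_private_file(). A minimal sketch only: the surrounding lines follow
my reading of fs/aio.c around this time and may differ between kernel
versions.

static struct file *aio_private_file(struct kioctx *ctx, loff_t nr_pages)
{
	struct inode *inode = alloc_anon_inode(aio_mnt->mnt_sb);

	if (IS_ERR(inode))
		return ERR_CAST(inode);

	inode->i_mapping->a_ops = &aio_ctx_aops;
	inode->i_mapping->private_data = ctx;
	inode->i_size = PAGE_SIZE * nr_pages;

	/*
	 * Suggested addition: mark the ring buffer mapping unevictable so
	 * vmscan skips these pages instead of unmapping them for no benefit
	 * under memory pressure.
	 */
	mapping_set_unevictable(inode->i_mapping);

	/* ... dentry/struct file allocation continues unchanged ... */
}
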
>>>> triggers the OOM-killer, which kills sshd and other daemons on my
>>>> machine. These pages were not even faulted in (or the shrinker can
>>>> unmap them), so the kernel cannot know who should be blamed.
>>>
>>> The OOM-killer killed the wrong process: News at 11.
>>
>> Well. I do not think we should blame the OOM-killer in this case. But
>> as I said, this is not a bug report or anything like that; I agree this
>> is a minor issue.
>
> I do think the OOM-killer is doing the wrong thing here. If process X is
> the only one that is allocating gobs of memory, why kill process Y that
> hasn't allocated memory in minutes or hours just because it is bigger?

I assume that if a process hasn't allocated memory in minutes or hours,
then most probably all of its evictable memory has already been moved
to swap, since its pages ended up at the tail of the LRU lists.

> We're not perfect at tracking who owns memory allocations, so why not
> factor in memory allocation rate when deciding which process to kill? We
> keep throwing bandaids on the OOM-killer by annotating allocations, and we
> keep missing the annotation of allocations. Doesn't sound like a real fix
> for the underlying problem to me when a better heuristic would solve the
> current problem and probably several other future instances of the same
> problem.
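
If that is the direction, here is a very rough sketch of what a
rate-aware score could look like. Everything below is purely
illustrative: the helpers (task_resident_pages(), task_recent_alloc_rate())
and the weighting do not exist in the kernel, and this is not the real
oom_badness() from mm/oom_kill.c.

#include <linux/sched.h>

/*
 * Hypothetical victim-selection score: size still matters, but a task
 * that has been allocating heavily in the recent past scores higher
 * than an equally large task that has been idle for hours.  Both
 * helpers are illustrative placeholders, not existing kernel APIs.
 */
static unsigned long sketch_oom_score(struct task_struct *p)
{
	unsigned long rss  = task_resident_pages(p);    /* resident pages */
	unsigned long rate = task_recent_alloc_rate(p); /* pages/s, decayed */

	return rss + 8 * rate;
}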

Kirill
