Date: 2011-05-24
From: KOSAKI Motohiro
Subject: Re: [PATCH 3/5] oom: oom-killer don't use proportion of system-ram internally
(2011/05/24 7:28), David Rientjes wrote:
> On Fri, 20 May 2011, KOSAKI Motohiro wrote:
>
>> CAI Qian reported that his kernel hung when he ran a fork-intensive
>> workload and then invoked the oom-killer.
>>
>> The problem is that the current oom calculation uses a 0-1000
>> normalized value (the unit is a permillage of system RAM). Its low
>> precision produces many identical oom scores. IOW, in his case, every
>> process had an oom score smaller than 1, and the internal calculation
>> rounded them all up to 1.
>>
>> Thus the oom-killer killed an ineligible process. This regression was
>> caused by commit a63d83f427 (oom: badness heuristic rewrite).
>>
>> The solution is for the internal calculation to use the number of
>> pages instead of a permillage of system RAM, and to convert to a
>> permillage value only at display time.
>>
>> This patch doesn't change any ABI (including
>> /proc/<pid>/oom_score_adj), even though the current logic has a lot
>> that I dislike.
>>
>
> Same response as when you initially proposed this patch:
> http://marc.info/?l=linux-kernel&m=130507086613317 -- you never replied to
> that.

I did reply. Why didn't you read it?
http://www.gossamer-threads.com/lists/linux/kernel/1378837#1378837
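To restate the idea: the internal score stays in pages, and is converted
to a permillage only when userspace reads /proc/<pid>/oom_score. A
minimal sketch of that split (schematic, not the patch itself; the
counters are the ones the existing heuristic already sums):

/* internal: keep the badness score in pages, no early rounding */
static unsigned long oom_badness_pages(struct task_struct *p)
{
	return get_mm_rss(p->mm) +
	       get_mm_counter(p->mm, MM_SWAPENTS) +
	       p->mm->nr_ptes;
}

/* display: normalize to 0-1000 only at read time */
static unsigned long oom_score_permillage(unsigned long pages,
					  unsigned long totalpages)
{
	return pages * 1000 / totalpages;
}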

If you haven't understood the issue, you can apply the following patch
and run it.


diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index b01fa64..f35909b 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -718,6 +718,9 @@ void out_of_memory(struct zonelist *zonelist, gfp_t gfp_mask,
 	 */
 	constraint = constrained_alloc(zonelist, gfp_mask, nodemask,
 						&totalpages);
+
+	totalpages *= 10;
+
 	mpol_mask = (constraint == CONSTRAINT_MEMORY_POLICY) ? nodemask : NULL;
 	check_panic_on_oom(constraint, gfp_mask, order, mpol_mask);
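
For reference: the hunk above just inflates totalpages by 10 before the
badness calculation runs, which shrinks every task's normalized score
tenfold and pushes even more tasks into the same rounded bucket. The
effect reduces to userspace arithmetic (a sketch; the 16GB/4KB-page
figures match CAI Qian's machine, the function is not the kernel's
exact code):

#include <stdio.h>

/* core of the 0-1000 normalization from commit a63d83f427:
 * anything below one permille of RAM collapses to the same score */
static unsigned long oom_score(unsigned long task_pages,
			       unsigned long totalpages)
{
	unsigned long points = task_pages * 1000 / totalpages;

	return points ? points : 1;
}

int main(void)
{
	unsigned long totalpages = 4194304;	/* 16GB in 4KB pages */

	/* a 4MB task and a 16MB task already get the same score */
	printf("%lu %lu\n", oom_score(1024, totalpages),
	       oom_score(4096, totalpages));		/* -> 1 1 */

	/* with totalpages * 10, even a 160MB task still scores 1 */
	printf("%lu\n", oom_score(40960, totalpages * 10));	/* -> 1 */
	return 0;
}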


> The changelog doesn't accurately represent CAI Qian's problem; the issue
> is that root processes are given too large of a bonus in comparison to
> other threads that are using at most 1.9% of available memory. That can
> be fixed, as I suggested, by giving a 1% bonus per 10% of memory used,
> so that a process would have to be using 10% before it even receives a
> bonus.
>
> I already suggested an alternative patch to CAI Qian to greatly increase
> the granularity of the oom score from a range of 0-1000 to 0-10000 to
> differentiate between tasks within 0.01% of available memory (about
> 1.6MB on CAI Qian's 16GB system). I'll propose this officially in a
> separate email.
>
> This patch also includes undocumented changes such as changing the bonus
> given to root processes.
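
For reference, a sketch of the two alternatives described in the quote
above; the names and constants are illustrative, not the patch David
later posted:

/* (a) widen the score range from 0-1000 to 0-10000 so that tasks
 * differing by as little as 0.01% of memory get distinct scores */
static unsigned long oom_score_10k(unsigned long task_pages,
				   unsigned long totalpages)
{
	return task_pages * 10000 / totalpages;
}

/* (b) give a root task a 1% discount per full 10% of memory it uses,
 * instead of a flat bonus, so small root tasks get no discount at all */
static unsigned long root_discount(unsigned long points)
{
	/* with points in [0, 10000]: 10% of memory = 1000 points */
	return (points / 1000) * 100;
}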





