Subject: Re: [PATCH] trace: Set oom_score_adj to maximum for ring buffer allocating process
On Thu, 26 May 2011, Vaibhav Nagarnaik wrote:

> > Hmm, have you tried this in practice? Yes, we may kill the "echo"
> > command, but that doesn't stop the ring buffer from being allocated,
> > so killing the echo command may not be enough, and the critical
> > processes you are trying to protect will be killed next.
> >
>
> Yes I did try this and found that it works as we intend it to. When
> oom-killer is invoked, it picks the process which has lowest
> oom_score_adj and kills it or one of its children.

s/lowest/highest/

> When the process is
> getting killed, any memory allocation from it returns -ENOMEM, which
> gets handled in our allocation process, and we free up the previously
> allocated memory.
>

Not sure that's true; this is allocating with kzalloc_node(GFP_KERNEL),
correct? If current is oom killed, it will have access to all memory
reserves, which will increase the likelihood that the allocation will
succeed before the SIGKILL is handled.
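
For reference, a simplified sketch of the allocation pattern under
discussion (not the literal kernel/trace/ring_buffer.c code; the
buffer_page_sketch struct and function name are stand-ins): each page
descriptor is allocated with kzalloc_node(GFP_KERNEL), and on the first
failure everything allocated so far is freed and -ENOMEM is returned.

#include <linux/slab.h>
#include <linux/list.h>
#include <linux/topology.h>

/* Stand-in for the ring buffer's internal page descriptor. */
struct buffer_page_sketch {
	struct list_head list;
};

static int rb_allocate_pages_sketch(struct list_head *pages,
				    unsigned long nr_pages, int cpu)
{
	struct buffer_page_sketch *bpage, *tmp;
	unsigned long i;

	for (i = 0; i < nr_pages; i++) {
		/*
		 * GFP_KERNEL: if current has already been oom killed,
		 * it gains access to memory reserves, so this call may
		 * keep succeeding rather than returning NULL.
		 */
		bpage = kzalloc_node(sizeof(*bpage), GFP_KERNEL,
				     cpu_to_node(cpu));
		if (!bpage)
			goto free_pages;
		list_add(&bpage->list, pages);
	}

	return 0;

free_pages:
	/* Unwind: free what was allocated so far and report -ENOMEM. */
	list_for_each_entry_safe(bpage, tmp, pages, list) {
		list_del_init(&bpage->list);
		kfree(bpage);
	}
	return -ENOMEM;
}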

> This API is now being used in other parts of the kernel too, where it
> is known that the allocation could cause an OOM condition.
>

What's wrong with using __GFP_NORETRY to avoid oom killing entirely and
then failing the ring buffer memory allocation? Seems like a better
solution than relying on the oom killer, since there may be other threads
with a max oom_score_adj as well that would appear in the tasklist first
and get killed unnecessarily. Is there some ring buffer code that can't
handle failing allocations appropriately?
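
As a sketch of that alternative (reusing the stand-in names from the
snippet above, so only the gfp flags are the point here), the
kzalloc_node() call would become:

		/*
		 * __GFP_NORETRY: the page allocator gives up instead of
		 * retrying and invoking the oom killer, so the caller
		 * just sees NULL and can unwind with -ENOMEM; nothing
		 * gets killed.
		 */
		bpage = kzalloc_node(sizeof(*bpage),
				     GFP_KERNEL | __GFP_NORETRY,
				     cpu_to_node(cpu));
		if (!bpage)
			goto free_pages;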

