Subject: Re: [RFC PATCH 2/2] memcg: do not report racy no-eligible OOM tasks
On 2018/11/06 21:42, Michal Hocko wrote:
> On Tue 06-11-18 18:44:43, Tetsuo Handa wrote:
> [...]
>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>> index 6e1469b..a97648a 100644
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -1382,8 +1382,13 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
>> };
>> bool ret;
>>
>> - mutex_lock(&oom_lock);
>> - ret = out_of_memory(&oc);
>> + if (mutex_lock_killable(&oom_lock))
>> + return true;
>> + /*
>> + * A few threads which were not waiting at mutex_lock_killable() can
>> + * fail to bail out. Therefore, check again after holding oom_lock.
>> + */
>> + ret = fatal_signal_pending(current) || out_of_memory(&oc);
>> mutex_unlock(&oom_lock);
>> return ret;
>> }
>
> If we are going with a memcg specific thingy then I really prefer the
> tsk_is_oom_victim approach. Or is there any reason why this is not
> suitable?
>
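
A rough sketch of the tsk_is_oom_victim() based variant being referred to
(illustrative only, not a posted patch; it assumes the same
mem_cgroup_out_of_memory() body shown in the diff above):

	if (mutex_lock_killable(&oom_lock))
		return true;
	/*
	 * The current task may have been selected as an OOM victim while
	 * it was waiting for oom_lock; in that case there is no point in
	 * invoking the OOM killer again.
	 */
	ret = tsk_is_oom_victim(current) || out_of_memory(&oc);
	mutex_unlock(&oom_lock);
	return ret;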

Why do we need to wait for mark_oom_victim(), which is only called after the
slow printk() messages?

If the current thread got Ctrl-C and can therefore terminate, what is gained
by waiting for the OOM killer? What if there are several OOM events in
multiple memcg domains, each waiting for the printk() messages to complete?
I don't see the point of waiting for oom_lock, since try_charge() already
allows the current thread to terminate due to the fatal_signal_pending() test.
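
The try_charge() bailout being referred to is roughly the following
(paraphrased from mm/memcontrol.c of this era, not an exact quote):

	/*
	 * Dying tasks and tasks that already have an OOM kill pending are
	 * allowed to bypass the charge limits so that they can exit
	 * quickly and release their memory.
	 */
	if (unlikely(tsk_is_oom_victim(current) ||
		     fatal_signal_pending(current) ||
		     current->flags & PF_EXITING))
		goto force;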
