From: Michal Hocko
Date: Thu, 27 Jul 2017
Subject: Re: [PATCH] mm, oom: allow oom reaper to race with exit_mmap
On Thu 27-07-17 13:59:09, Manish Jaggi wrote:
[...]
> With 4.11.6 I was getting random kernel panics (Out of memory - No process left to kill)
> when running the LTP oom01/oom02 tests on our arm64 hardware with ~256G memory and a high core count.
> The issue experienced was as follows:
> either test (oom01/oom02) selected a pid as the victim and waited for that pid to be killed.
> The pid was marked as killed, but somewhere there is a race and the process didn't actually get killed,
> so the oom01/oom02 test went on killing further processes until the kernel panicked.

>
> IIUC this issue is quite similar to your patch description, but I still see the issue after applying your patch.
> If it is not related to this patch, can you please look at the log and suggest
> what could be preventing the killing of the victim?
>
> Log (https://pastebin.com/hg5iXRj2)
>
> When a subtest of oom02 starts, it prints out the victim - in this case 4578.
>
> oom02 0 TINFO : start OOM testing for mlocked pages.
> oom02 0 TINFO : expected victim is 4578.
>
> When the oom02 thread invoked the oom-killer, it did select 4578 for killing...

I will definitely have a look. Can you report it in a separate email
thread please? Are you able to reproduce with the current Linus or
linux-next trees?
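
For context, oom02 exercises mlocked memory; the core of what each
allocating task does boils down to something like the sketch below.
This is a hypothetical distillation, not the actual LTP code -- the
real logic lives in oom02.c and the shared mem helpers in the LTP
tree, and the allocation size here is made up:

#include <string.h>
#include <sys/mman.h>

int main(void)
{
	/* size is arbitrary for the sketch; LTP sizes this to the box */
	size_t len = (size_t)1 << 36;	/* 64G */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	mlock(p, len);		/* the oom02 variant pins its pages */
	memset(p, 1, len);	/* dirty every page until OOM fires */
	return 0;
}

The point is that the victim's memory ends up on the unevictable LRU,
which matters for what the reaper can do with it later.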
>
>
> [ 364.737486] oom02:4583 invoked oom-killer: gfp_mask=0x16080c0(GFP_KERNEL|__GFP_ZERO|__GFP_NOTRACK), nodemask=1, order=0, oom_score_adj=0
> [...] snip
> [ 365.036127] [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name
> [ 365.044691] [ 1905] 0 1905 3236 1714 10 4 0 0 systemd-journal
> [...] snip
> [ 365.222325] [ 4491] 0 4491 27965 1022 8 3 0 0 bash
> [ 365.230849] [ 4513] 0 4513 670 365 5 3 0 0 oom02
> [ 365.239459] [ 4578] 0 4578 37776030 32890957 64257 138 0 0 oom02
> [ 365.248067] Out of memory: Kill process 4578 (oom02) score 952 or sacrifice child
> [ 365.255581] Killed process 4578 (oom02) total-vm:151104120kB, anon-rss:131562528kB, file-rss:1300kB, shmem-rss:0kB
> [ 365.266829] out_of_memory: Current (4583) has a pending SIGKILL
> [ 365.267347] oom_reaper: reaped process 4578 (oom02), now anon-rss:131559616kB, file-rss:0kB, shmem-rss:0kB
> [ 365.282658] oom_reaper: reaped process 4583 (oom02), now anon-rss:131561664kB, file-rss:0kB, shmem-rss:0kB
>
> ==> At this point the test should have completed with a TPASS or TFAIL, but it didn't, and the oom-killer keeps getting invoked.
>
> [ 365.283361] oom02:4586 invoked oom-killer: gfp_mask=0x16040c0(GFP_KERNEL|__GFP_COMP|__GFP_NOTRACK), nodemask=1, order=0, oom_score_adj=0

Yes, because:
[ 365.283499] Node 1 Normal free:19500kB min:33804kB low:165916kB high:298028kB active_anon:13312kB inactive_anon:172kB active_file:0kB inactive_file:1044kB unevictable:131560064kB writepending:0kB present:134213632kB managed:132113248kB mlocked:131560064kB slab_reclaimable:5748kB slab_unreclaimable:17808kB kernel_stack:2720kB pagetables:254636kB bounce:0kB free_pcp:10476kB local_pcp:144kB free_cma:0kB

Although we have killed and reaped the oom02 process, Node 1 is still below
the min watermark and that is why we have hit the oom killer again. It
is not immediately clear to me why; that would require a deeper
inspection.
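
Just to illustrate the arithmetic: plugging the numbers from that
meminfo line into an order-0 watermark test shows every allocation
attempt on Node 1 diving straight back into the OOM path. This is a
deliberately simplified sketch; the real __zone_watermark_ok() in
mm/page_alloc.c also accounts for allocation order, lowmem reserves
and ALLOC_* flags:

#include <stdbool.h>
#include <stdio.h>

/* numbers copied from the Node 1 Normal line above, in kB */
static const unsigned long free_kb = 19500;
static const unsigned long min_kb = 33804;

/* grossly simplified order-0 form of the zone watermark check */
static bool watermark_ok(unsigned long free, unsigned long mark)
{
	return free > mark;
}

int main(void)
{
	printf("Node 1 watermark ok: %s\n",
	       watermark_ok(free_kb, min_kb) ? "yes" : "no -> OOM path");
	return 0;
}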

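One suspicious detail: the dump shows unevictable == mlocked ==
131560064kB, i.e. practically all of Node 1's managed memory is
mlocked (oom02 tests mlocked pages), and anon-rss barely moved across
the two oom_reaper lines above. The reaper in this era refuses to
touch mlocked vmas, which would be consistent with that. Below is a
runnable model of just that policy check; the flag values are copied
from include/linux/mm.h, and the in-kernel test is
can_madv_dontneed_vma(), which __oom_reap_task_mm() consults before
unmapping anything:

#include <stdbool.h>
#include <stdio.h>

/* vm_flags bits as defined in include/linux/mm.h */
#define VM_PFNMAP	0x00000400UL
#define VM_LOCKED	0x00002000UL
#define VM_HUGETLB	0x00400000UL

/* mirrors can_madv_dontneed_vma(): vmas the reaper may tear down */
static bool reapable(unsigned long vm_flags)
{
	return !(vm_flags & (VM_LOCKED | VM_HUGETLB | VM_PFNMAP));
}

int main(void)
{
	/* an oom02 mapping: private anonymous, but mlocked */
	printf("mlocked vma reapable: %s\n",
	       reapable(VM_LOCKED) ? "yes" : "no");
	return 0;
}

So even though the oom_reaper reports the victim as reaped, mlocked
anonymous memory would stay put until exit_mmap() actually runs,
which could be one reason Node 1 stays below its min watermark.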
--
Michal Hocko
SUSE Labs
