Date: 2014-11-14
From: Michal Hocko
Subject: anon_vma accumulating for certain load still not addressed
Hi,
back in 2012 [1] there was a discussion about a forking load which
accumulates anon_vma structs. There was a trivial test case which
triggers the problem and which a local user can potentially use to
deplete memory.
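For context, here is a minimal sketch of the kind of forking load
described. It is not the exact reproducer from [1] (the mapping size,
loop structure and sleep interval are my assumptions), but it exercises
the same pattern: a private anonymous mapping is inherited across a
chain of forks while each parent exits, so every generation adds
anon_vma objects that are only released once the last descendant dies:

#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* Private anonymous mapping; faulting it in creates an anon_vma. */
	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;
	memset(p, 1, 4096);

	for (;;) {
		pid_t pid = fork();
		if (pid < 0)
			return 1;
		if (pid > 0)
			_exit(0);	/* parent goes away, child carries on */
		/*
		 * Child: this generation got its own anon_vma chained to the
		 * ancestors', and those stay allocated while the pages they
		 * cover are still mapped here.
		 */
		usleep(100000);		/* pace the fork chain (arbitrary) */
	}
}

Watching "grep anon_vma /proc/slabinfo" while something like this runs
shows the object count growing over time, as in the transcript below.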

We have a report for an older enterprise distribution where nsd is most
probably suffering from this issue (I haven't debugged it thoroughly,
but accumulating anon_vma structs over time sounds like a good enough
fit) and has to be restarted after some time to release the accumulated
anon_vma objects.

There was a patch which tried to work around the issue [2] but I do not
see any follow-ups nor any indication that the issue would be addressed
in another way.

The test program from [1] ran for around 39 minutes on my laptop and
here is the result:

$ date +%s; grep anon_vma /proc/slabinfo
1415960225
anon_vma 11664 11900 160 25 1 : tunables 0 0 0 : slabdata 476 476 0

$ ./a # The reproducer

$ date +%s; grep anon_vma /proc/slabinfo
1415962592
anon_vma 34875 34875 160 25 1 : tunables 0 0 0 : slabdata 1395 1395 0

$ killall a
$ date +%s; grep anon_vma /proc/slabinfo
1415962607
anon_vma 11277 12175 160 25 1 : tunables 0 0 0 : slabdata 487 487 0

So we accumulated 23211 objects (34875 - 11664) over that time period
before the offender was killed, which released all of them.

The proposed workaround is kind of ugly, but do people have a better
idea than reference counting? If not, should we merge it?

---
[1] https://lkml.org/lkml/2012/8/15/765
[2] https://lkml.org/lkml/2013/6/3/568
--
Michal Hocko
SUSE Labs

