Subject: Re: 2.6.39-rc4+: oom-killer busy killing tasks
On Mon, Apr 25, 2011 at 12:19:24AM -0700, Christian Kujau wrote:
> On Mon, 25 Apr 2011 at 09:46, Dave Chinner wrote:
> > I'd say they are not being reclaimed because the VFS hasn't let go
> > of them yet. Can you also dump /proc/sys/fs/{dentry,inode}-state so
> > we can see if the VFS has released the inodes such that they can be
> > reclaimed by XFS?
>
> Please see http://nerdbynature.de/bits/2.6.39-rc4/oom/
>
> - slabinfo-4.txt.bz2, contains /proc/sys/fs/{dentry,inode}-state and

Ok, so looking at slabinfo-5.txt.bz2, this is the pattern of dentry,
VFS inode and XFS inode cache usage:

http://userweb.kernel.org/~dgc/slab-usage.png

What this shows is that VFS inode cache memory usage increases until
about the 550-sample mark, at which point the VM starts to reclaim
it with extreme prejudice. I'd then expect the XFS inode cache to
shrink as well, but it doesn't. I've got no idea why either the
shrinker or background reclaim is failing to reclaim and free
inodes, but that failure is the reason the system OOMs.
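FWIW, if you want to keep gathering these samples, a simple loop
along these lines is enough (a rough sketch - it assumes the slab
caches are named "dentry" and "xfs_inode", as in your slabinfo
dumps, and an output filename of your choosing):

	#!/bin/sh
	# sample dentry/inode slab usage once a second
	while sleep 1; do
		date +%s
		grep -E '^(dentry|xfs_inode) ' /proc/slabinfo
		printf 'dentry-state: '; cat /proc/sys/fs/dentry-state
		printf 'inode-state: '; cat /proc/sys/fs/inode-state
	done >> slab-samples.txt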

Can you check whether there are any blocked tasks as the system
nears OOM (i.e. "echo w > /proc/sysrq-trigger") so we can see if
XFS inode reclaim is stuck somewhere?
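If sysrq is disabled on your machine you'll need to enable it
first, and the output lands in the kernel log, so something like
this should capture it:

	echo 1 > /proc/sys/kernel/sysrq	# enable sysrq (if not already on)
	echo w > /proc/sysrq-trigger	# dump stacks of blocked (D state) tasks
	dmesg > blocked-tasks.txt	# save the kernel log output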

Cheers,

Dave.
--
Dave Chinner
david@fromorbit.com

