Subject: Re: 21 million inodes is causing severe pauses.
Date: 16 Nov 2004
Robin Holt <holt@sgi.com> wrote:
>
> On Mon, Nov 15, 2004 at 02:57:14PM -0800, Andrew Morton wrote:
> > Robin Holt <holt@sgi.com> wrote:
> > >
> > > One significant problem we are running into is autofs trying to umount the
> > > file systems. This results in the umount grabbing the BKL and inode_lock,
> > > holding them while it scans through the inode_list and others looking for
> > > inodes used by this super block and attempting to free them.
> >
> > You'll need invalidate_inodes-speedup.patch and
> > break-latency-in-invalidate_list.patch (or an equivalent).
>
> With these patches and a new test where I periodically put on mild
> but diminishing memory pressure, I have been able to get the number
> of inodes up to 31 million. I would really like to find a way to
> limit the number of inodes, or am I seeing a problem where none
> exists? After putting on constant mild memory pressure, I have seen
> the number of inodes stabilize at 17-18 million.

There shouldn't be anything wrong with that per se. You have a big system
and a workload which touches a lot of inodes. It would be better to fix up
the lock hold times than to reduce the cached inode count just because the
lock hold times are sucky.
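
Something along these lines - a userspace sketch only, with made-up names
(fake_inode, scan_list, BATCH), not the actual invalidate_inodes-speedup or
break-latency code - is the shape of that fix: bound how long any single
pass holds the lock by dropping and re-taking it every few entries.

    /*
     * Userspace sketch of the latency-breaking idea, not the real patches.
     * Instead of holding the lock for the entire list walk, release and
     * re-take it every BATCH entries so other contenders are not stalled
     * for the whole scan.  All names here are invented for the example.
     */
    #include <pthread.h>
    #include <sched.h>

    #define BATCH 32                        /* entries scanned per lock hold */

    struct fake_inode {
            struct fake_inode *next;
            int in_use;
    };

    static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Walk the list with bounded lock hold times. */
    static void scan_list(struct fake_inode *head)
    {
            struct fake_inode *p = head;
            int n = 0;

            pthread_mutex_lock(&list_lock);
            while (p) {
                    if (!p->in_use)
                            p->in_use = -1; /* stand-in for freeing the inode */
                    p = p->next;
                    if (++n % BATCH == 0) {
                            /*
                             * Drop the lock so other contenders get a turn.
                             * In the kernel the list can change here, so the
                             * real code must revalidate or restart the walk
                             * after re-taking the lock.
                             */
                            pthread_mutex_unlock(&list_lock);
                            sched_yield();
                            pthread_mutex_lock(&list_lock);
                    }
            }
            pthread_mutex_unlock(&list_lock);
    }

    int main(void)
    {
            static struct fake_inode nodes[1000];   /* zero-initialised */
            int i;

            for (i = 0; i < 999; i++) {
                    nodes[i].next = &nodes[i + 1];
                    nodes[i].in_use = i % 2;        /* mark half of them in use */
            }
            scan_list(&nodes[0]);
            return 0;
    }

In the kernel the scan additionally has to cope with the list changing while
the lock is dropped, which is what makes the real patches more involved than
the sketch above.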

That being said, you may get some joy by greatly increasing
/proc/sys/vm/vfs_cache_pressure. Try 10000.
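
For what it's worth, vfs_cache_pressure defaults to 100, and larger values
bias reclaim toward the dentry and inode caches, so unused inodes get pruned
sooner under memory pressure. The usual way to bump it is
"echo 10000 > /proc/sys/vm/vfs_cache_pressure" (or
sysctl -w vm.vfs_cache_pressure=10000); a minimal C equivalent, purely as a
sketch:

    /*
     * Minimal sketch: set vm.vfs_cache_pressure from a program.  Equivalent
     * to writing the value into /proc/sys/vm/vfs_cache_pressure as root.
     */
    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/proc/sys/vm/vfs_cache_pressure", "w");

            if (!f) {
                    perror("/proc/sys/vm/vfs_cache_pressure");
                    return 1;
            }
            fprintf(f, "%d\n", 10000);
            return fclose(f) ? 1 : 0;
    }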
