Date: 6 Feb 2004
From: Matt Mackall
Subject: Re: Limit hash table size
On Thu, Feb 05, 2004 at 07:09:04PM -0800, Andrew Morton wrote:
> Andi Kleen <ak@suse.de> wrote:
> >
> > Andrew Morton <akpm@osdl.org> writes:
> >
> > > Ken, I remain unhappy with this patch. If a big box has 500 million
> > > dentries or inodes in cache (which is possible), those hash chains will be more
> > > than 200 entries long on average. It will be very slow.
> >
> > How about limiting the global size of the dcache in this case ?
>
> But to what size?
>
> The thing is, any workload which touches a huge number of dentries/inodes
> will, if it touches them again, touch them again in exactly the same order.
> This triggers the worst-case LRU behaviour.
>
> So if you limit dcache to 100MB and you happen to have a workload which
> touches 101MB's worth, you get a 100% miss rate. You suffer a 100000%
> slowdown on the second pass, which is unhappy. It doesn't seem worth
> crippling such workloads just because of the updatedb thing.

A less strict approach to LRU might serve. Probabilistically dropping an
entry from somewhere in the first half of the LRU list, rather than always
from the head, would go a long way toward degrading gracefully in the
"working set slightly larger than cache" case. There are a couple of
different ways to do this reasonably efficiently.
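
To make that concrete, here is a rough userspace sketch (not kernel code,
and not meant as a patch): it keeps a plain doubly-linked LRU and, on
eviction, picks a victim at a uniformly random offset within the first half
of the list instead of always taking the head. All the names here
(struct lru_entry, lru_evict_one, and so on) are made up for illustration,
and a real in-kernel version would need its own source of randomness
rather than rand().

#include <stdlib.h>
#include <sys/queue.h>          /* BSD/glibc TAILQ list macros */

struct lru_entry {
        TAILQ_ENTRY(lru_entry) link;
        /* ... the cached object would live here ... */
};

TAILQ_HEAD(lru_list, lru_entry);

/*
 * Evict one entry.  Instead of always removing the head (the coldest
 * entry), choose a random position within the first half of the list,
 * so a cyclic scan over a working set slightly larger than the cache
 * no longer evicts entries in exactly the order it will re-touch them.
 */
static struct lru_entry *lru_evict_one(struct lru_list *lru,
                                       unsigned long count)
{
        struct lru_entry *victim = TAILQ_FIRST(lru);
        unsigned long skip;

        if (!victim)
                return NULL;

        if (count >= 2) {
                skip = (unsigned long)rand() % (count / 2);
                while (skip-- > 0 && TAILQ_NEXT(victim, link))
                        victim = TAILQ_NEXT(victim, link);
        }

        TAILQ_REMOVE(lru, victim, link);
        return victim;
}

The knob is how deep into the list the random choice may reach: limiting
it to the first half keeps recently-touched entries safe while still
breaking the strict in-order eviction that gives the 100% miss rate on
the second pass.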

--
Matt Mackall : http://www.selenic.com : Linux development and consulting