Date: Tue, 3 Sep 2002
From: William Lee Irwin III
Subject: Re: 2.5.33-mm1

William Lee Irwin III wrote:
>> It also looks like there's either a bit of internal fragmentation or a
>> missing kmem_cache_reap() somewhere:
>> ext3_inode_cache: 20001KB 51317KB 38.97
>> dentry_cache: 4734KB 18551KB 25.52
>> radix_tree_node: 1811KB 1923KB 94.20
>> buffer_head: 1132KB 1378KB 82.12

On Tue, Sep 03, 2002 at 06:13:17PM -0700, Andrew Morton wrote:
> That's really outside the control of slablru. It's determined
> by the cache-specific LRU algorithms, and the allocation order.
> You'll need to look at the second-last and third-last columns in
> /proc/slabinfo (boy I wish that thing had a heading line, or a nice
> program to interpret it):
> ext3_inode_cache 959 2430 448 264 270 1
> That's 264 pages in use, 270 total. If there's a persistent gap between
> these then there is a problem - could well be that slablru is not locating
> the pages which were liberated by the pruning sufficiently quickly.
> Calling kmem_cache_reap() after running the pruners will fix that up.
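Since Andrew wished for a nice program to interpret it, here is a
minimal userspace sketch (mine, not anything that ships with -mm)
that parses the 2.5-era /proc/slabinfo columns quoted above (name,
active objects, total objects, object size, active slabs, total
slabs, pages per slab) and prints the in-use vs. total page counts
per cache:

#include <stdio.h>

/*
 * Rough slabinfo interpreter.  Assumes the single-line format quoted
 * above; the version header and any trailing statistics columns are
 * ignored.
 */
int main(void)
{
	char line[256], name[64];
	unsigned long aobjs, objs, objsize, aslabs, slabs, ppslab;
	FILE *f = fopen("/proc/slabinfo", "r");

	if (!f) {
		perror("/proc/slabinfo");
		return 1;
	}
	printf("%-20s %11s %11s\n", "cache", "pages-used", "pages-total");
	while (fgets(line, sizeof(line), f)) {
		/* Skip lines that don't carry the seven core fields,
		 * e.g. the "slabinfo - version:" header. */
		if (sscanf(line, "%63s %lu %lu %lu %lu %lu %lu",
			   name, &aobjs, &objs, &objsize,
			   &aslabs, &slabs, &ppslab) != 7)
			continue;
		printf("%-20s %11lu %11lu\n",
		       name, aslabs * ppslab, slabs * ppslab);
	}
	fclose(f);
	return 0;
}

On the ext3_inode_cache line Andrew quotes it prints 264 and 270,
matching his reading.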

# grep ext3_inode_cache /proc/slabinfo
ext3_inode_cache 18917 87012 448 7686 9668 1
...
ext3_inode_cache: 8098KB 38052KB 21.28

Looks like a persistent gap from here.
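That's 7686 slab pages in use against 9668 allocated (one page per
slab), i.e. nearly 2000 completely free pages, about 7.7MB with 4KB
pages, which slablru hasn't got back yet.  A minimal sketch of the
fix Andrew describes, assuming the 2.4-style helpers prune_dcache(),
prune_icache() and kmem_cache_reap(gfp_mask); the actual -mm
internals may well differ:

/*
 * Hypothetical shrink path: once the pruners have freed dentries and
 * inodes, reap the slab caches immediately so the pages of any
 * now-empty slabs go straight back to the page allocator instead of
 * waiting for slablru to find them.
 */
static int shrink_fs_caches(int gfp_mask)
{
	prune_dcache(128);	/* free up to 128 unused dentries */
	prune_icache(128);	/* free up to 128 unused inodes */

	/* Release the slabs emptied by the pruning above. */
	return kmem_cache_reap(gfp_mask);
}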


Cheers,
Bill