Date: 20 Dec 2004
From: James Pearson <james-p@moving-picture.com>
Subject: Re: Reducing inode cache usage on 2.4?
I've tested the patch on my test setup - running a 'find $disk -type f'
and a cat of large files to /dev/null at the same time does indeed
reduce the size of the inode and dentry caches considerably - the
first-column counts for fs_inode, linvfs_icache and dentry_cache in
/proc/slabinfo hover at about 400-600 (compared with over 900,000
previously).
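
(For reference, rather than eyeballing the whole of /proc/slabinfo, I'm
watching just those three lines with a trivial reader - a quick sketch,
nothing clever; on 2.4 the first field after the cache name is the
active object count:)

/* slabwatch.c - print just the slab caches of interest from
 * /proc/slabinfo. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/slabinfo", "r");
	char line[512];

	if (!f) {
		perror("/proc/slabinfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		if (strstr(line, "fs_inode") ||
		    strstr(line, "linvfs_icache") ||
		    strstr(line, "dentry_cache"))
			fputs(line, stdout);
	fclose(f);
	return 0;
}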

However, is this going a bit too far the other way? When I boot the
machine with 4GB of RAM, the inode and dentry caches are squeezed down
to the same levels, where it might well be more beneficial to keep more
entries cached. Perhaps what's needed is some sort of tunable that sets
a minimum size for the inode and dentry caches in this case - something
like the sketch below.
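
(Entirely hypothetical - vm_min_inodes doesn't exist in any tree, and
the surrounding function is only paraphrased from 2.4 fs/inode.c - but
the shape of the idea would be:)

/* Hypothetical sketch: refuse to shrink the inode cache below a
 * tunable floor, so a large-memory box keeps a useful minimum of
 * inodes cached.  vm_min_inodes would be a new sysctl, e.g.
 * /proc/sys/vm/vm_min_inodes. */
int vm_min_inodes = 16384;	/* hypothetical tunable */

void shrink_icache_memory(int priority, int gfp_mask)
{
	int count = 0;

	/* Leave the cache alone while it is below the floor. */
	if (inodes_stat.nr_unused < vm_min_inodes)
		return;

	/* The rest paraphrases the existing 2.4 behaviour. */
	if (priority)
		count = inodes_stat.nr_unused / priority;
	prune_icache(count);
}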

That said, I notice my 'find $disk -type f' (over about 2 million
files) runs a lot faster with the smaller inode/dentry caches - about 1
or 2 minutes with the patched kernel compared with about 5 to 7 minutes
with the unpatched kernel - I guess it was taking longer to search the
huge inode/dentry caches than to read directly from disk.
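
For what it's worth, my reading of the patch (quoted below) is that it
simply moves the 'freed enough pages' early return below the VFS cache
shrink calls, so the dentry/inode/quota slabs now get scanned on every
reclaim pass rather than only when pagecache reclaim falls short - i.e.
the loop ends up looking roughly like this:

	do {
		nr_pages = shrink_caches(classzone, gfp_mask, nr_pages,
					 &failed_swapout);
		/* Shrink the VFS slab caches on every pass, even when
		 * shrink_caches() has already freed enough pagecache... */
		shrink_dcache_memory(vm_vfs_scan_ratio, gfp_mask);
		shrink_icache_memory(vm_vfs_scan_ratio, gfp_mask);
#ifdef CONFIG_QUOTA
		shrink_dqcache_memory(vm_vfs_scan_ratio, gfp_mask);
#endif
		/* ...and only then take the early exit. */
		if (nr_pages <= 0)
			return 1;
		if (!failed_swapout)
			failed_swapout = !swap_out(classzone);
	} while (--tries);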

James Pearson

Marcelo Tosatti wrote:
> James,
>
> Can you apply Andrew's patch and examine the results?
>
> I've merged it to mainline because it looks sensible.
>
> Thanks Andrew!
>
> On Fri, Dec 17, 2004 at 05:21:04PM -0800, Andrew Morton wrote:
>
>>James Pearson <james-p@moving-picture.com> wrote:
>>
>>>It seems the inode cache has priority over cached file data.
>>
>>It does. If the machine is full of unmapped clean pagecache pages the
>>kernel won't even try to reclaim inodes. This should help a bit:
>>
>>--- 24/mm/vmscan.c~a	2004-12-17 17:18:31.660254712 -0800
>>+++ 24-akpm/mm/vmscan.c	2004-12-17 17:18:41.821709936 -0800
>>@@ -659,13 +659,13 @@ int fastcall try_to_free_pages_zone(zone
>> 
>> 	do {
>> 		nr_pages = shrink_caches(classzone, gfp_mask, nr_pages, &failed_swapout);
>>-		if (nr_pages <= 0)
>>-			return 1;
>> 		shrink_dcache_memory(vm_vfs_scan_ratio, gfp_mask);
>> 		shrink_icache_memory(vm_vfs_scan_ratio, gfp_mask);
>> #ifdef CONFIG_QUOTA
>> 		shrink_dqcache_memory(vm_vfs_scan_ratio, gfp_mask);
>> #endif
>>+		if (nr_pages <= 0)
>>+			return 1;
>> 		if (!failed_swapout)
>> 			failed_swapout = !swap_out(classzone);
>> 	} while (--tries);
>>_
>>
>>
>>
>>> What triggers the 'normal ageing round'? Is it possible to trigger this
>>> earlier (at a lower memory usage), or give a higher priority to cached data?
>>
>>You could also try lowering /proc/sys/vm/vm_mapped_ratio. That will cause
>>inodes to be reaped more easily, but will also cause more swapout.

