    Date: 17 Dec 2004
    From: James Pearson
    Subject: Re: Reducing inode cache usage on 2.4?
    Marcelo Tosatti wrote:
    >
    >>Or am I looking in completely the wrong place i.e. the inode cache is
    >>not the problem?
    >
    >
    > No, in your case the extreme inode/dcache sizes indeed seem to be a problem.
    >
    > The kernel's default shrinking ratio can be tuned for more efficient reclaim.
    >
    >
    >>xfs_inode 931428 931428 408 103492 103492 1 : 124 62
    >>dentry_cache 499222 518850 128 17295 17295 1 : 252 126
    >
    >
    > vm_vfs_scan_ratio:
    > ------------------
    > is what proportion of the VFS queues we will scan in one go.
    > A value of 6 for vm_vfs_scan_ratio implies that 1/6th of the
    > unused-inode, dentry and dquot caches will be freed during a
    > normal aging round.
    > Big fileservers (NFS, SMB etc.) probably want to set this
    > value to 3 or 2.
    >
    > The default value is 6.
    > =============================================================
    >
    > Tune /proc/sys/vm/vm_vfs_scan_ratio, increasing the value to 10 and so on,
    > and examine the results.

    Thanks for the info - but doesn't increasing the value of
    vm_vfs_scan_ratio mean that less of the caches will be freed (1/10th
    per ageing round instead of 1/6th)?
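
    For reference, this is roughly how I'm flipping the value between test
    runs - a minimal Python sketch, assuming the knob is exposed as a plain
    file under /proc/sys/vm:

        # read the current ratio (the documented default is 6)
        with open('/proc/sys/vm/vm_vfs_scan_ratio') as f:
            print(f.read().strip())

        # set it to 1, i.e. free 1/1 of the unused-inode, dentry and
        # dquot caches per ageing round (writing needs root)
        with open('/proc/sys/vm/vm_vfs_scan_ratio', 'w') as f:
            f.write('1\n')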

    Doing a few tests (on another test file system with 2 million or so
    files and 1Gb of memory) running 'find $disk -type f': with
    vm_vfs_scan_ratio set to 6 (or 10), the first two column values for
    xfs_inode, linvfs_icache and dentry_cache in /proc/slabinfo reach about
    900000 and stay around that value. With vm_vfs_scan_ratio set to 1,
    each value still reaches 900000, but then falls to a few thousand,
    climbs back up to 900000, drops away again, and so on repeatedly.
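
    (In case it matters, this is more or less the loop I'm using to watch
    those counts - a rough Python sketch; it assumes the 2.4 slabinfo
    layout where the first three fields are cache name, active objects and
    total objects, and the cache names are the ones on my box:)

        import time

        CACHES = ('xfs_inode', 'linvfs_icache', 'dentry_cache')

        while True:
            with open('/proc/slabinfo') as slabinfo:
                for line in slabinfo:
                    fields = line.split()
                    # fields[1] = active objects, fields[2] = total objects
                    if fields and fields[0] in CACHES:
                        print(fields[0], fields[1], fields[2])
            time.sleep(5)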

    This still happens when I cat many large files (100Mb each) to
    /dev/null at the same time as running the find, i.e. the inode caches
    can still reach 90% of memory before being reclaimed (with
    vm_vfs_scan_ratio set to 1).

    If I stop the find process when the inode caches reach about 90% of
    memory, and then start cat'ing the large files, it appears the inode
    caches are never reclaimed (or at least not within the time it takes to
    cat 100Gb of data to /dev/null) - is this expected behaviour?
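
    (For completeness, the cat'ing can be reproduced with something like
    the following Python sketch - the file paths are placeholders for my
    test files; it just streams each file and discards the data, as cat to
    /dev/null would:)

        # stand-in for 'cat bigfile > /dev/null'
        for path in ('/disk/big1', '/disk/big2'):   # hypothetical paths
            with open(path, 'rb') as f:
                while f.read(1 << 20):   # read 1Mb chunks and discard them
                    pass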

    It seems the inode cache has priority over cached file data.

    What triggers the 'normal aging round'? Is it possible to trigger this
    earlier (at a lower memory usage), or to give cached file data a higher
    priority?

    Thanks

    James Pearson

