Subject: Re: 2.4.25 - large inode_cache
On Thu, Feb 26, 2004 at 11:23:46AM -0300, Marcelo Tosatti wrote:
...
> > Will a heap of busy knfsd processes doing reads or writes exert
> > pressure? Or is it only local userspace that can pressurize the VM
> > (via either anonymously backed memory or file I/O)?
>
> Any allocator will cause VM pressure.

And I suppose that a busy knfsd qualifies as an "allocator" :)

...
> > Any enlightenment or suggestions are greatly appreciated :)
>
> What you can try is to lower the VM tunable vm_vfs_scan_ratio. This
> controls what fraction (1/N) of the unused VFS dentry/inode caches the
> VM will try to free in one freeing pass. The default is 6. Try 4 or 3.
>
> /proc/sys/vm/vm_vfs_scan_ratio

Done! Set to 3 now - I will let the box run with this setting until
tomorrow, and report back how things look.
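
In case anyone wants to script this: below is a minimal sketch in C
(nothing kernel-specific; running "echo 3 > /proc/sys/vm/vm_vfs_scan_ratio"
as root does exactly the same thing) that prints the current value and
then writes the new one. The choice of 3 is just the value being tried here.

	/* Sketch only: read and then lower vm_vfs_scan_ratio via procfs.
	 * Needs root to write; equivalent to a simple echo into the file. */
	#include <stdio.h>

	int main(void)
	{
		const char *path = "/proc/sys/vm/vm_vfs_scan_ratio";
		int ratio = 0;
		FILE *f = fopen(path, "r");

		if (f && fscanf(f, "%d", &ratio) == 1)
			printf("current %s = %d\n", path, ratio);
		if (f)
			fclose(f);

		f = fopen(path, "w");
		if (!f) {
			perror(path);
			return 1;
		}
		fprintf(f, "3\n");
		fclose(f);
		return 0;
	}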

> You can also play with
>
> /proc/sys/vm/vm_cache_scan_ratio (which is the fraction, again 1/N, of
> the page cache that will be scanned in one go).

I'm leaving this one be for now (one variable at a time). But let's see
what tomorrow brings.

Judging from the code, it seems that it's the vm_vfs_scan_ratio that
directly affects the icache/dcache and dquot - but I'm sure that there
are subtle interactions far beyond what I can possibly hope to
comprehend ;)
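
For my own sanity I wrote down the arithmetic as I understand it - purely
illustrative, not the actual 2.4.25 source, and the unused-entry count is
made up - showing why lowering the ratio from 6 to 3 should roughly double
what a single freeing pass reclaims from the unused d/i caches:

	/* Illustrative arithmetic only -- not kernel code.  As I read the
	 * 2.4 VM, each freeing pass prunes roughly 1/vm_vfs_scan_ratio of
	 * the unused dentry/inode entries, so a smaller ratio means a
	 * bigger chunk of the caches reclaimed per pass.  The 600000 is a
	 * made-up count of unused entries. */
	#include <stdio.h>

	int main(void)
	{
		long nr_unused = 600000;
		int ratios[] = { 6, 4, 3 };

		for (unsigned i = 0; i < sizeof(ratios) / sizeof(ratios[0]); i++)
			printf("vm_vfs_scan_ratio=%d -> prune about %ld entries per pass\n",
			       ratios[i], nr_unused / ratios[i]);
		return 0;
	}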

Thanks a lot for your suggestions Marcelo!

/ jakob

