Subject: Re: RCU scaling on large systems
On Fri, May 07, 2004 at 04:32:35PM -0700, Andrew Morton wrote:
> Jack Steiner <steiner@sgi.com> wrote:
> >
> > The calls to RCU are coming from here:
> >
> > [11]kdb> bt
> > Stack traceback for pid 3553
> > 0xe00002b007230000 3553 3139 1 11 R 0xe00002b0072304f0 *ls
> > 0xa0000001000feee0 call_rcu
> > 0xa0000001001a3b20 d_free+0x80
> > 0xa0000001001a3ec0 dput+0x340
> > 0xa00000010016bcd0 __fput+0x210
> > 0xa00000010016baa0 fput+0x40
> > 0xa000000100168760 filp_close+0xc0
> > 0xa000000100168960 sys_close+0x180
> > 0xa000000100011be0 ia64_ret_from_syscall
> >
> > I see this same backtrace from numerous processes.
>
> eh? Why is dput freeing the dentry? It should just be leaving it in cache.
>
> What filesystem is being used? procfs?

Deleting entries from the dcache can be a frequent operation; even rename()
triggers d_free.

Note that I changed my tree to free all the negative dentries that are
currently generated by unlink. I find it useless to leave negative dentries
around after "unlink". I leave them after a failed lookup, of course (that's
the fundamental use of negative dentries for userspace PATH lookups), but
not after unlink; I think that's wasted memory that would be better used for
other purposes. I think this is also the reason why, when dcache-RCU was
once benchmarked on top of my 2.4 tree, it resulted in a loss of
performance.
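
For reference, here is a rough sketch of the spot where the policy differs
(simplified from the ~2.6.5 mainline d_delete() in fs/dcache.c; not the
exact code and not my patch):

	/*
	 * Sketch: after unlink, mainline either turns the dentry negative
	 * in place (if we are the only user) or just unhashes it.  The
	 * change described above amounts to always unhashing, so the final
	 * dput() frees the dentry via d_free()/call_rcu() instead of
	 * caching a negative entry.
	 */
	void d_delete(struct dentry *dentry)
	{
		spin_lock(&dcache_lock);
		spin_lock(&dentry->d_lock);
		if (atomic_read(&dentry->d_count) == 1) {
			dentry_iput(dentry);	/* stays hashed, now negative */
			return;			/* dentry_iput drops both locks */
		}
		if (!d_unhashed(dentry))
			__d_drop(dentry);	/* unhashed: freed on last dput() */
		spin_unlock(&dentry->d_lock);
		spin_unlock(&dcache_lock);
	}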

In my 2.6-aa I didn't forward port all my 2.4-aa stuff, but I really would
like to avoid wasting memory there again for files that are being deleted.
Plus, during memory pressure (which can be generated even by pagecache), the
dcache must be freed up (admittedly, there we could free more than one entry
for every quiescent point, as sketched below).
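
Something along these lines, purely illustrative (the dfree_batch structure
and function names are made up for the example, and call_rcu() here is the
three-argument form of that era):

	/*
	 * Hypothetical sketch: instead of one call_rcu() per pruned dentry,
	 * chain the dentries already unhashed by the shrinker onto a batch
	 * and free the whole batch from a single RCU callback, so one grace
	 * period pays for many dentries.
	 */
	struct dfree_batch {
		struct rcu_head rcu;
		struct list_head dentries;	/* chained through dentry->d_lru */
	};

	static void dfree_batch_callback(void *arg)
	{
		struct dfree_batch *batch = arg;
		struct dentry *dentry, *next;

		list_for_each_entry_safe(dentry, next, &batch->dentries, d_lru)
			kmem_cache_free(dentry_cache, dentry);
		kfree(batch);
	}

	/* called after the dentries in the batch have been unhashed */
	static void dfree_batch_schedule(struct dfree_batch *batch)
	{
		call_rcu(&batch->rcu, dfree_batch_callback, batch);
	}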

I said a few years ago that I agreed completely with RCU _only_ for usages
where the reader is extremely _frequent_ and the writer is _extremely_
unlikely to run; my most obvious example was the replacement of the big
reader lock.

RCU basically trades much higher performance for the reader against much
lower performance for the writer. RCU is like a rwlock with the read-lock
being extremely fast (actually it's literally a noop), but with a very slow
writer (though the writer is still better than the write side of the big
reader lock, especially since the writer suffers no starvation and can
coalesce things together to maximize icache usage etc.). The more cpus, the
more performance you gain from RCU's noop read-lock operation; the more
cpus, the lower the performance you get in the writer. Very few things come
for free ;).
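
To make the tradeoff concrete, a generic sketch of the two sides (using the
current primitive names; the 2.6 API of the time spelled some of these
differently, e.g. synchronize_kernel() and a three-argument call_rcu(), and
writer serialization and error handling are omitted):

	struct config {
		int value;
	};

	static struct config *global_cfg;	/* RCU-protected pointer */

	/* reader: essentially free - rcu_read_lock() compiles down to (at
	 * most) disabling preemption, and no shared cacheline is written */
	int read_value(void)
	{
		int v;

		rcu_read_lock();
		v = rcu_dereference(global_cfg)->value;
		rcu_read_unlock();
		return v;
	}

	/* writer: pays for the allocation, the copy and a full grace period
	 * (every cpu must pass through a quiescent state) before the old
	 * copy can be freed */
	void write_value(int v)
	{
		struct config *new = kmalloc(sizeof(*new), GFP_KERNEL);
		struct config *old;

		new->value = v;
		old = global_cfg;
		rcu_assign_pointer(global_cfg, new);
		synchronize_rcu();	/* wait for all pre-existing readers */
		kfree(old);
	}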
