Date:    26 Sep 2001
From:    Dipankar Sarma <dipankar@in.ibm.com>
Subject: Re: Locking comment on shrink_caches()
On Tue, Sep 25, 2001 at 10:31:32PM -0700, Andrew Morton wrote:
> Dipankar Sarma wrote:
> >
> > John Hawkes from SGI had published some AIM7 numbers that showed
> > pagecache_lock to be a bottleneck above 4 processors. At 32 processors,
> > half the CPU cycles were spent on waiting for pagecache_lock. The
> > thread is at -
> >
> > http://marc.theaimsgroup.com/?l=lse-tech&m=98459051027582&w=2
> >
>
> That's NUMA hardware. The per-hashqueue locking change made
> a big improvement on that hardware. But when it was used on
> Intel hardware it made no measurable difference at all.
>
> Sorry, but the patch adds complexity, and unless a significant
> throughput benefit can be demonstrated on less exotic hardware,
> why use it?

I agree that on NUMA systems, contention and lock wait times
degenerate non-linearly, thereby skewing the apparent impact.

IIRC, there were discussions on lse-tech about pagecache_lock and
the dbench numbers published by Juergen Doelle (on an 8-way Intel box)
and Anton Blanchard (on a 16-way PPC). Perhaps they can shed some
light on this.
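
For anyone who hasn't seen the patch, here is a minimal user-space
sketch of the per-hashqueue idea Andrew mentions above: instead of a
single pagecache_lock protecting every hash chain, each bucket carries
its own lock, so lookups and insertions on different chains no longer
serialize on one lock. The names (bucket, cache_entry, cache_lookup)
are illustrative only, and it uses pthread mutexes rather than the
kernel's spinlocks; this is not the actual patch.

#include <pthread.h>
#include <stdlib.h>

#define NR_BUCKETS 1024

struct cache_entry {
        unsigned long key;
        void *data;
        struct cache_entry *next;
};

struct bucket {
        pthread_mutex_t lock;           /* one lock per hash chain */
        struct cache_entry *chain;
};

static struct bucket table[NR_BUCKETS];

static unsigned int hash_key(unsigned long key)
{
        return (unsigned int)(key % NR_BUCKETS);
}

void cache_init(void)
{
        int i;

        for (i = 0; i < NR_BUCKETS; i++) {
                pthread_mutex_init(&table[i].lock, NULL);
                table[i].chain = NULL;
        }
}

/* Lookup takes only the lock for the chain it actually walks. */
void *cache_lookup(unsigned long key)
{
        struct bucket *b = &table[hash_key(key)];
        struct cache_entry *e;
        void *data = NULL;

        pthread_mutex_lock(&b->lock);
        for (e = b->chain; e; e = e->next) {
                if (e->key == key) {
                        data = e->data;
                        break;
                }
        }
        pthread_mutex_unlock(&b->lock);
        return data;
}

/* Insert contends only with users of the same chain. */
int cache_insert(unsigned long key, void *data)
{
        struct bucket *b = &table[hash_key(key)];
        struct cache_entry *e = malloc(sizeof(*e));

        if (!e)
                return -1;
        e->key = key;
        e->data = data;

        pthread_mutex_lock(&b->lock);
        e->next = b->chain;
        b->chain = e;
        pthread_mutex_unlock(&b->lock);
        return 0;
}

The trade-off Andrew points to still holds, of course: the extra locks
add complexity and only pay off when many CPUs hit the hash at once.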

Thanks
Dipankar
--
Dipankar Sarma <dipankar@in.ibm.com> Project: http://lse.sourceforge.net
Linux Technology Center, IBM Software Lab, Bangalore, India.