 
Subject: Re: [patch]x86: spread tlb flush vector between nodes
From: Shaohua Li
Date: 2010-10-19
On Wed, 2010-10-13 at 16:39 +0800, Shaohua Li wrote:
> On Wed, 2010-10-13 at 16:16 +0800, Andi Kleen wrote:
> > On Wed, Oct 13, 2010 at 03:41:38PM +0800, Shaohua Li wrote:
> >
> > Hi Shaohua,
> >
> > > Currently the flush tlb vector allocation is based on the equation below:
> > > sender = smp_processor_id() % 8
> > > This isn't optimal: CPUs from different nodes can get the same vector, which
> > > causes a lot of lock contention. Instead, we can assign the same vectors to
> > > CPUs from the same node, while different nodes get different vectors. This has
> > > the following advantages:
> > > a. If there is lock contention, it is between CPUs from one node. This should
> > > be much cheaper than contention between nodes.
> > > b. It completely avoids lock contention between nodes. This especially benefits
> > > kswapd, which is the biggest user of tlb flush, since kswapd sets its affinity
> > > to a specific node.
> >
> > The original scheme with 8 vectors was designed when Linux didn't have
> > per CPU interrupt numbers yet, and interrupt vectors were a scarce resource.
> >
> > Now that we have per CPU interrupts and there is no immediate danger
> > of running out, I think it's better to use more than 8 vectors for the TLB
> > flushes.
> >
> > Perhaps we could use 32 vectors or so and give each node 4 slots on an
> > 8-socket system and 8 slots on a 4-node system?
> I don't have a strong opinion. Before we had per-CPU interrupts, multi-vector
> MSI-X wasn't widely deployed. I think we need data on whether this is really
> required.
It looks like there is still some overhead with a total of 8 vectors on a big
machine. I'll try 32 vectors as you suggested and send separate patches out to
address that. Can we merge this patch first?
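
For illustration, a minimal sketch of the per-node vector assignment being
discussed (the helper name calculate_tlb_offset and the per-CPU
tlb_vector_offset variable are assumed for this sketch, not taken verbatim
from the patch):

#include <linux/percpu.h>
#include <linux/cpumask.h>
#include <linux/nodemask.h>
#include <linux/topology.h>
#include <asm/irq_vectors.h>	/* NUM_INVALIDATE_TLB_VECTORS: 8 today, maybe 32 later */

/* Per-CPU slot; the send path would read this instead of computing
 * smp_processor_id() % NUM_INVALIDATE_TLB_VECTORS. */
static DEFINE_PER_CPU(int, tlb_vector_offset);

static void calculate_tlb_offset(void)
{
	int cpu, node, nr_node_vecs, idx = 0;

	/*
	 * Split the vectors evenly between online nodes.  If there are more
	 * nodes than vectors, fall back to one slot per node and let nodes
	 * share vectors modulo NUM_INVALIDATE_TLB_VECTORS.
	 */
	if (nr_online_nodes > NUM_INVALIDATE_TLB_VECTORS)
		nr_node_vecs = 1;
	else
		nr_node_vecs = NUM_INVALIDATE_TLB_VECTORS / nr_online_nodes;

	for_each_online_node(node) {
		int node_offset = (idx % NUM_INVALIDATE_TLB_VECTORS) *
				  nr_node_vecs;
		int cpu_offset = 0;

		/* Hand this node's slots out round-robin to its CPUs. */
		for_each_cpu(cpu, cpumask_of_node(node)) {
			per_cpu(tlb_vector_offset, cpu) =
				node_offset + cpu_offset;
			cpu_offset = (cpu_offset + 1) % nr_node_vecs;
		}
		idx++;
	}
}

With 32 vectors this gives each node 4 slots on an 8-node box and 8 slots on a
4-node box, which matches the split Andi describes; the sender would then pick
its vector with something like this_cpu_read(tlb_vector_offset) in
flush_tlb_others_ipi() rather than the modulo on smp_processor_id().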

Thanks,
Shaohua


