Subject: Re: x86: Is there still value in having a special tlb flush IPI vector?
Date: 29 July 2008
On Tuesday 29 July 2008 09:34, Ingo Molnar wrote:
> * Jeremy Fitzhardinge <jeremy@goop.org> wrote:
> > Now that normal smp_call_function is no longer an enormous bottleneck,
> > is there still value in having a specialised IPI vector for tlb
> > flushes? It seems like quite a lot of duplicate code.
> >
> > The 64-bit tlb flush multiplexes the various cpus across 8 vectors to
> > increase scalability. If this is a big issue, then the smp function
> > call code can (and should) do the same thing. (Though looking at it
> > more closely, the way the code uses the 8 vectors is actually a less
> > general way of doing what smp_call_function is doing anyway.)
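
For context, a minimal sketch of the multiplexing scheme being described
above. The names and locking are hypothetical and simplified (this is not
the actual arch/x86 tlb_64.c code): each sending CPU hashes itself onto
one of eight slots, each with its own IPI vector, lock and pending-CPU
mask, so up to eight flushes can be in flight at once.

#include <linux/cpumask.h>
#include <linux/smp.h>
#include <linux/spinlock.h>
#include <asm/tlbflush.h>

#define NUM_FLUSH_SLOTS 8       /* hypothetical stand-in for the 8 vectors */

struct flush_slot {
        spinlock_t lock;        /* one sender at a time per slot */
        cpumask_t pending;      /* CPUs that still have to flush */
};

/* The locks need spin_lock_init() at boot; initialisation is omitted. */
static struct flush_slot slots[NUM_FLUSH_SLOTS];

/* Runs on each target CPU when this slot's IPI vector fires. */
static void flush_slot_ipi(struct flush_slot *s)
{
        __flush_tlb();                                  /* flush the local TLB */
        cpu_clear(smp_processor_id(), s->pending);      /* tell the sender */
}

/* Runs on the sending CPU, with preemption already disabled. */
static void send_flush(const cpumask_t *targets)
{
        struct flush_slot *s = &slots[smp_processor_id() % NUM_FLUSH_SLOTS];

        spin_lock(&s->lock);
        s->pending = *targets;
        /* ... raise this slot's vector; each target runs flush_slot_ipi() ... */
        while (!cpus_empty(s->pending))
                cpu_relax();            /* wait for every target to flush */
        spin_unlock(&s->lock);
}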

It definitely is not a clear win: the two schemes do not have the same
characteristics, so numbers will be needed.

smp_call_function is now properly scalable in its smp_call_function_single
form. The more general case of multiple targets is not so easy; it still
takes a global lock and touches global cachelines.
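
As a point of reference, a minimal sketch of the single-target case
(hypothetical function names; the four-argument smp_call_function_single()
is the post-rewrite form): the caller hands a function to exactly one
other CPU and, with wait set, spins until it has run there.

#include <linux/smp.h>

/* Runs on the target CPU, in IPI context. */
static void read_remote(void *info)
{
        int *out = info;

        *out = smp_processor_id();
}

static int query_cpu(int cpu)
{
        int val = -1;

        /*
         * wait == 1: return only once read_remote() has run on 'cpu'.
         * This single-target path is the one that scales; the multi-target
         * smp_call_function() path is the one that still takes a global
         * lock, as noted above.
         */
        smp_call_function_single(cpu, read_remote, &val, 1);

        return val;
}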

I don't think it is a good use of time, honestly. Do you have a good reason?


> yep, and we could eliminate the reschedule IPI as well.

No. The rewrite now makes it very good at synchronously sending a function
to a single other CPU.

Sending asynchronously requires a slab allocation on the sending side and
then a remote slab free (which is nasty for the slab allocator) at the other
end, plus bouncing of locks and cachelines. No way you want to do that in
the reschedule IPI.
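
To make that concrete, a hypothetical sketch (made-up struct and function
names, not the kernel's actual call data machinery) of what a
fire-and-forget cross-CPU call ends up looking like: the request is
allocated on the sending CPU and freed on the receiving one, so slab
structures and cachelines bounce between the two.

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/smp.h>

struct async_req {
        int arg;
};

/* Runs on the destination CPU. */
static void remote_work(void *info)
{
        struct async_req *req = info;

        /* ... act on req->arg ... */

        kfree(req);     /* remote free: the memory was allocated on another CPU */
}

static int send_async(int cpu, int arg)
{
        struct async_req *req = kmalloc(sizeof(*req), GFP_ATOMIC);

        if (!req)
                return -ENOMEM;
        req->arg = arg;

        /* wait == 0: the caller does not block, so the callee must free req. */
        return smp_call_function_single(cpu, remote_work, req, 0);
}

A reschedule IPI carries no payload at all, so it needs none of this.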

Not to mention the minor problem that it still deadlocks when called with
interrupts disabled ;)
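
For illustration, a hypothetical call site showing that pattern: if two
CPUs do this to each other at the same time, neither can service the
other's IPI, so both waits spin forever.

#include <linux/irqflags.h>
#include <linux/smp.h>

static void noop(void *info)
{
}

static void cross_call(int target_cpu)
{
        local_irq_disable();
        /*
         * wait == 1 spins until noop() has run on target_cpu.  If that
         * CPU is doing the same thing aimed back at us, it cannot take
         * our IPI (its interrupts are off) and we cannot take its, so
         * neither wait ever finishes.
         */
        smp_call_function_single(target_cpu, noop, NULL, 1);
        local_irq_enable();
}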

