Subject: Re: [benchmark] 1% performance overhead of paravirt_ops on native kernels
On Tue, Jun 09, 2009 at 09:00:08AM -0700, Linus Torvalds wrote:
>
>
> On Tue, 9 Jun 2009, Nick Piggin wrote:
> >
> > If it's such a problem, it could be made a lot faster without too
> > much problem. You could just introduce a FIFO of ptes behind it
> > and flush them all in one go. 4K worth of ptes per CPU might
> > hopefully bring your overhead down to < 1%.
>
> We already have that. The regular kmap() does that. It's just not usable
> in atomic context.

Well, this would be more like the kmap cache idea than the
kmap_atomic FIFO (which would remain per-cpu and look much like
the existing kmap_atomic).
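
To make the batching concrete, here is a minimal user-space sketch of
the per-CPU PTE FIFO suggested above: hand out fixed virtual slots in
ring order and pay one TLB flush per full ring instead of one per
unmap. All names (kmap_fifo, KMAP_FIFO_SLOTS, the *_stub helpers) are
invented for illustration; this is not the kernel's actual
kmap_atomic implementation.

	/*
	 * Hypothetical sketch (plain C, not kernel code) of a per-CPU PTE
	 * FIFO: instead of flushing the TLB on every atomic unmap, reuse
	 * slots in ring order and flush the whole batch when the ring wraps.
	 */
	#define KMAP_FIFO_SLOTS 1024	/* ~4K worth of 4-byte ptes (assumption) */

	struct kmap_fifo {
		void *slot_vaddr[KMAP_FIFO_SLOTS];	/* fixed virtual slots */
		unsigned int head;	/* next slot to hand out */
		unsigned int used;	/* slots mapped since the last flush */
	};

	/* Stand-ins for the real pte/TLB primitives. */
	static void set_pte_stub(void *vaddr, unsigned long pfn)
	{
		(void)vaddr; (void)pfn;
	}
	static void flush_tlb_batch_stub(void) { }

	static void *kmap_fifo_map(struct kmap_fifo *f, unsigned long pfn)
	{
		/*
		 * head wraps exactly when every slot has been handed out
		 * once, so flushing here guarantees a slot's old
		 * translation is gone before it is reused: one flush
		 * amortized over KMAP_FIFO_SLOTS mappings.
		 */
		if (f->used == KMAP_FIFO_SLOTS) {
			flush_tlb_batch_stub();
			f->used = 0;
		}

		void *vaddr = f->slot_vaddr[f->head];
		set_pte_stub(vaddr, pfn);
		f->head = (f->head + 1) % KMAP_FIFO_SLOTS;
		f->used++;
		return vaddr;
	}

The win is pure amortization: the common-case cost of a map drops to a
pte write, with one global flush every KMAP_FIFO_SLOTS maps.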


> We'd need to fix the locking: right now kmap_high() uses non-irq-safe
> locks, and it does that whole cross-cpu flushing thing (which is why
> those locks _have_ to be non-irq-safe).
>
> The way to fix that, though, would be to never do any cross-cpu calls, and
> instead just have a cpumask saying "you need to flush before you do
> anything with kmap". So you'd just set that cpumask inside the lock, and
> if/when some other CPU does a kmap, they'd flush their local TLB at _that_
> point instead of having to have an IPI call.

The idea seems nice, but isn't the problem that kmap returns what is
basically first-class kernel virtual memory? (ie. it can then be used
by any other CPU at any point, without that CPU having to call kmap?)
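
For reference, here is a hypothetical model (again plain C with
invented names: cpus_need_flush, mark_all_cpus_stale,
kmap_maybe_flush, local_tlb_flush_stub) of the lazy cross-CPU flush
described above: the CPU recycling old mappings marks every CPU
stale under the lock, and each CPU flushes its own TLB the next time
it enters kmap(), so no IPI is ever sent.

	#include <stdatomic.h>

	#define NR_CPUS 64	/* assumption for the model; must fit one word */

	static atomic_ulong cpus_need_flush;	/* bit N set => CPU N must flush */

	static void local_tlb_flush_stub(void) { }	/* stand-in for the real flush */

	/* Called under the kmap lock when old mappings are recycled:
	 * mark every CPU stale instead of sending a cross-CPU call. */
	static void mark_all_cpus_stale(void)
	{
		atomic_store(&cpus_need_flush, ~0UL);
	}

	/* Run at the top of every kmap() on the current CPU. */
	static void kmap_maybe_flush(int this_cpu)
	{
		unsigned long mask = 1UL << this_cpu;

		/* Atomically clear our bit; if it was set, our TLB may
		 * hold stale kmap translations, so flush locally. */
		if (atomic_fetch_and(&cpus_need_flush, ~mask) & mask)
			local_tlb_flush_stub();
	}

Note that the objection above is visible in the model: a CPU that
dereferences a kmap'ed address it was handed by someone else never
calls kmap_maybe_flush(), so the lazy scheme alone doesn't cover it.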


