From: Andy Lutomirski
Date: Fri, 29 Jan 2016
Subject: Re: [PATCH v2 3/3] x86/mm: If INVPCID is available, use it to flush global mappings
On Fri, Jan 29, 2016 at 6:26 AM, Borislav Petkov <bp@alien8.de> wrote:
> On Mon, Jan 25, 2016 at 10:37:44AM -0800, Andy Lutomirski wrote:
>> On my Skylake laptop, INVPCID function 2 (flush absolutely
>> everything) takes about 376ns, whereas saving flags, twiddling
>> CR4.PGE to flush global mappings, and restoring flags takes about
>> 539ns.
>
> FWIW, I ran your microbenchmark on the IVB laptop I have here 3 times
> and some of the numbers from each run are pretty unstable. Not that it
> means a whole lot - the thing doesn't have INVPCID support.
>
> I'm just questioning the microbenchmark and whether we should rather
> be doing those measurements with a real benchmark, whatever that
> means. My limited experience says that measuring TLB performance is
> hard.
>
> ./context_switch_latency 0 thread same
> use_xstate = 0
> Using threads
> 1: 100000 iters at 2676.2 ns/switch
> 2: 100000 iters at 2700.2 ns/switch
> 3: 100000 iters at 2656.1 ns/switch
>
> ./context_switch_latency 0 thread different
> use_xstate = 0
> Using threads
> 1: 100000 iters at 5174.8 ns/switch
> 2: 100000 iters at 5140.5 ns/switch
> 3: 100000 iters at 5292.9 ns/switch
>
> ./context_switch_latency 0 process same
> use_xstate = 0
> Using a subprocess
> 1: 100000 iters at 2361.2 ns/switch
> 2: 100000 iters at 2332.2 ns/switch
> 3: 100000 iters at 3436.9 ns/switch
>
> ./context_switch_latency 0 process different
> use_xstate = 0
> Using a subprocess
> 1: 100000 iters at 4713.6 ns/switch
> 2: 100000 iters at 4957.5 ns/switch
> 3: 100000 iters at 5012.2 ns/switch
>
> ./context_switch_latency 1 thread same
> use_xstate = 1
> Using threads
> 1: 100000 iters at 2505.6 ns/switch
> 2: 100000 iters at 2483.1 ns/switch
> 3: 100000 iters at 2479.7 ns/switch
>
> ./context_switch_latency 1 thread different
> use_xstate = 1
> Using threads
> 1: 100000 iters at 5245.9 ns/switch
> 2: 100000 iters at 5241.1 ns/switch
> 3: 100000 iters at 5220.3 ns/switch
>
> ./context_switch_latency 1 process same
> use_xstate = 1
> Using a subprocess
> 1: 100000 iters at 2329.8 ns/switch
> 2: 100000 iters at 2350.2 ns/switch
> 3: 100000 iters at 2500.9 ns/switch
>
> ./context_switch_latency 1 process different
> use_xstate = 1
> Using a subprocess
> 1: 100000 iters at 4970.7 ns/switch
> 2: 100000 iters at 5034.0 ns/switch
> 3: 100000 iters at 4991.6 ns/switch
>

I'll fiddle with that benchmark a little bit. Maybe I can make it
suck less. If anyone knows a good non-micro benchmark for this, let
me know. I refuse to use dbus as my benchmark :)
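
For anyone who wants to reproduce numbers like the ones above without
the original source, here is a minimal sketch of this kind of
ping-pong microbenchmark (the names and structure are mine, not the
actual context_switch_latency code): two processes bounce a byte over
a pipe pair, and each round trip costs two context switches.

	#include <stdio.h>
	#include <time.h>
	#include <unistd.h>

	#define ITERS 100000

	int main(void)
	{
		int ab[2], ba[2];	/* parent->child, child->parent */
		char c = 0;
		struct timespec t0, t1;

		if (pipe(ab) || pipe(ba)) {
			perror("pipe");
			return 1;
		}

		if (fork() == 0) {
			/* Child: echo every byte straight back. */
			for (int i = 0; i < ITERS; i++) {
				read(ab[0], &c, 1);
				write(ba[1], &c, 1);
			}
			return 0;
		}

		clock_gettime(CLOCK_MONOTONIC, &t0);
		for (int i = 0; i < ITERS; i++) {
			write(ab[1], &c, 1);
			read(ba[0], &c, 1);
		}
		clock_gettime(CLOCK_MONOTONIC, &t1);

		double ns = (t1.tv_sec - t0.tv_sec) * 1e9 +
			    (t1.tv_nsec - t0.tv_nsec);

		/* Each round trip is two switches on a single CPU. */
		printf("%d iters at %.1f ns/switch\n",
		       ITERS, ns / (2.0 * ITERS));
		return 0;
	}

Pin both tasks to one CPU (e.g. taskset -c 0) so that each round trip
really is two context switches; otherwise the two tasks may land on
separate CPUs and you end up measuring cross-CPU wakeup latency
instead.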

FWIW, I benchmarked cr4 vs invpcid by adding a prctl and calling it in
a loop. If Ingo's fpu benchmark thing ever lands, I'll gladly send a
patch to add TLB flushes to it.
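
For reference, the two flush sequences being compared look roughly
like this. Treat it as a sketch: __invpcid() mirrors the helper added
earlier in this series, while flush_tlb_global_cr4() is a made-up
name for the traditional CR4.PGE toggle (the in-tree equivalent is
__native_flush_tlb_global()). It assumes the usual kernel headers
(linux/types.h, linux/irqflags.h, asm/special_insns.h,
asm/processor-flags.h).

	/* INVPCID: type in a register, 16-byte descriptor in memory. */
	static inline void __invpcid(unsigned long pcid, unsigned long addr,
				     unsigned long type)
	{
		struct { u64 d[2]; } desc = { { pcid, addr } };

		/*
		 * Hand-encoded because older assemblers lack the
		 * mnemonic; this is invpcid (%rcx), %rax.  The memory
		 * clobber keeps the compiler from reordering memory
		 * accesses across the flush.
		 */
		asm volatile (".byte 0x66, 0x0f, 0x38, 0x82, 0x01"
			      : : "m" (desc), "a" (type), "c" (&desc)
			      : "memory");
	}

	/* Function 2: invalidate everything, including global mappings. */
	static inline void invpcid_flush_all(void)
	{
		__invpcid(0, 0, 2);
	}

	/* The traditional path: toggle CR4.PGE with interrupts off. */
	static inline void flush_tlb_global_cr4(void)
	{
		unsigned long flags, cr4;

		raw_local_irq_save(flags);
		cr4 = native_read_cr4();
		native_write_cr4(cr4 & ~X86_CR4_PGE);	/* clearing PGE flushes all */
		native_write_cr4(cr4);			/* restore PGE */
		raw_local_irq_restore(flags);
	}

The CR4 path has to disable interrupts and do two serializing CR4
writes, which is presumably where the extra ~160ns on Skylake goes.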

--Andy
