From: Yang Zhang <yang.zhang.wz@gmail.com>

Some latency-intensive workloads see an obvious performance drop when
running inside a VM. The main reason is that the overhead is amplified
by virtualization, and the largest cost I have seen is in the idle path.
This series introduces a new mechanism to poll for a while before
entering the idle state. If a reschedule becomes pending during the
poll, we avoid going through the heavy overhead path.
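
To illustrate the idea, here is a minimal sketch of the poll loop (this
is only an illustration, not the code from the patches themselves; the
tunable poll_threshold_ns and the helper halt_poll() are placeholder
names):

#include <linux/ktime.h>
#include <linux/sched.h>

/*
 * Sketch only: spin for at most poll_threshold_ns before halting, so
 * that work arriving inside the poll window is picked up without
 * paying the full halt/wakeup cost of the virtualized idle path.
 */
static unsigned long poll_threshold_ns;	/* 0 effectively disables polling */

static bool halt_poll(void)
{
	ktime_t start = ktime_get();

	while (!need_resched()) {
		if (ktime_to_ns(ktime_sub(ktime_get(), start)) >
		    poll_threshold_ns)
			return false;	/* timed out: go ahead and halt */
		cpu_relax();		/* keep the busy-wait polite */
	}

	return true;			/* work is pending: skip the halt */
}

The caller would invoke halt_poll() right before the architecture's
halt instruction and only halt when it returns false.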

Here is the data I get when running the contextswitch benchmark
(https://github.com/tsuna/contextswitch):
before patch:
2000000 process context switches in 4822613801ns (2411.3ns/ctxsw)
after patch:
2000000 process context switches in 3584098241ns (1792.0ns/ctxsw)


Yang Zhang (2):
x86/idle: add halt poll for halt idle
x86/idle: use dynamic halt poll

Documentation/sysctl/kernel.txt          | 24 ++++++++++
arch/x86/include/asm/processor.h         |  6 +++
arch/x86/kernel/apic/apic.c              |  6 +++
arch/x86/kernel/apic/vector.c            |  1 +
arch/x86/kernel/cpu/mcheck/mce_amd.c     |  2 +
arch/x86/kernel/cpu/mcheck/therm_throt.c |  2 +
arch/x86/kernel/cpu/mcheck/threshold.c   |  2 +
arch/x86/kernel/irq.c                    |  5 ++
arch/x86/kernel/irq_work.c               |  2 +
arch/x86/kernel/process.c                | 80 ++++++++++++++++++++++++++++++++
arch/x86/kernel/smp.c                    |  6 +++
include/linux/kernel.h                   |  5 ++
kernel/sched/idle.c                      |  3 ++
kernel/sysctl.c                          | 23 +++++++++
14 files changed, 167 insertions(+)

--
1.8.3.1
