Subject: [LKP] [x86_64, entry] 2a23c6b8a9c: +3.5% aim9.creat-clo.ops_per_sec, -64.5% aim9.time.user_time
FYI, we noticed the following changes with

commit 2a23c6b8a9c42620182a2d2cfc7c16f6ff8c42b4 ("x86_64, entry: Use sysret to return to userspace when possible")
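For context: before this commit the 64-bit syscall exit path always returned to
userspace with IRET; the commit uses the much cheaper SYSRET instruction whenever
the saved register state is compatible with what SYSRET restores. Below is a
minimal C-style sketch of that eligibility test, assuming kernel context
(struct pt_regs, __USER_CS/__USER_DS, the X86_EFLAGS_* masks). It is illustrative
only; the real check is assembly in arch/x86/kernel/entry_64.S and also rejects
non-canonical RCX values.

/*
 * Illustrative sketch, not the commit's actual code.  SYSRET hard-codes
 * RIP := RCX, RFLAGS := R11 and the default user code/stack selectors,
 * so the exit path may only use it when the saved pt_regs already match
 * what SYSRET would restore; otherwise it must fall back to IRET.
 */
static bool sysret_possible(const struct pt_regs *regs)
{
	if (regs->cx != regs->ip)		/* SYSRET reloads RIP from RCX */
		return false;
	if (regs->r11 != regs->flags)		/* ...and RFLAGS from R11 */
		return false;
	if (regs->flags & (X86_EFLAGS_RF | X86_EFLAGS_TF))
		return false;			/* SYSRET cannot restore RF; TF needs IRET */
	if (regs->cs != __USER_CS || regs->ss != __USER_DS)
		return false;			/* SYSRET forces the default selectors */
	return true;				/* safe to take the SYSRET fast path */
}

SYSRET is cheaper than IRET mainly because it skips the segment and stack-switch
validation that IRET performs, which is why syscall-heavy workloads such as the
creat-clo and unlink2 tests below gain a few percent in throughput.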


testbox/testcase/testparams: lkp-wsx02/aim9/performance-300s-creat-clo

b926e6f61a26036e 2a23c6b8a9c42620182a2d2cfc
---------------- --------------------------
         %stddev      %change       %stddev
             \            |             \
23.75 ± 1% -64.5% 8.44 ± 1% aim9.time.user_time
276 ± 0% +5.5% 291 ± 0% aim9.time.system_time
533594 ± 1% +3.5% 552311 ± 0% aim9.creat-clo.ops_per_sec
1 ± 47% -100.0% 0 ± 0% numa-numastat.node2.other_node
6024 ± 40% -72.3% 1668 ± 48% sched_debug.cpu#10.ttwu_count
6408 ± 45% -71.8% 1806 ± 25% sched_debug.cpu#70.sched_goidle
12980 ± 44% -70.5% 3833 ± 23% sched_debug.cpu#70.nr_switches
6420 ± 47% -76.7% 1494 ± 36% sched_debug.cpu#66.ttwu_count
1328 ± 40% -76.0% 319 ± 16% sched_debug.cfs_rq[18]:/.exec_clock
2329 ± 42% -78.5% 501 ± 37% sched_debug.cpu#10.ttwu_local
815 ± 48% -53.3% 380 ± 43% sched_debug.cfs_rq[30]:/.exec_clock
5427 ± 40% -75.0% 1355 ± 28% sched_debug.cpu#18.ttwu_count
953 ± 45% -62.8% 355 ± 26% sched_debug.cfs_rq[70]:/.exec_clock
1 ± 34% +160.0% 3 ± 33% sched_debug.cpu#56.nr_uninterruptible
63 ± 37% -62.4% 24 ± 23% sched_debug.cfs_rq[3]:/.blocked_load_avg
4838 ± 27% -63.1% 1787 ± 31% sched_debug.cpu#10.sched_goidle
5901 ± 44% -67.6% 1914 ± 25% sched_debug.cpu#66.sched_goidle
4884 ± 28% -64.5% 1733 ± 26% sched_debug.cpu#18.sched_goidle
12006 ± 43% -66.0% 4077 ± 23% sched_debug.cpu#66.nr_switches
770 ± 44% -48.4% 397 ± 42% sched_debug.cfs_rq[6]:/.exec_clock
9861 ± 26% -61.0% 3847 ± 30% sched_debug.cpu#10.nr_switches
9983 ± 28% -62.7% 3723 ± 24% sched_debug.cpu#18.nr_switches
23.75 ± 1% -64.5% 8.44 ± 1% time.user_time
50 ± 45% +143.1% 122 ± 19% sched_debug.cfs_rq[50]:/.blocked_load_avg
55 ± 46% +126.1% 125 ± 19% sched_debug.cfs_rq[50]:/.tg_load_contrib
4591 ± 47% -62.5% 1723 ± 13% sched_debug.cpu#30.sched_goidle
9347 ± 46% -60.5% 3687 ± 12% sched_debug.cpu#30.nr_switches
70 ± 47% +73.9% 121 ± 31% sched_debug.cfs_rq[45]:/.tg_load_contrib
12075 ± 23% -36.3% 7687 ± 4% sched_debug.cpu#70.nr_load_updates
1757 ± 21% -36.0% 1124 ± 27% sched_debug.cfs_rq[63]:/.min_vruntime
16039 ± 36% -39.9% 9638 ± 3% sched_debug.cpu#6.nr_load_updates
56756 ± 4% -35.5% 36623 ± 3% softirqs.RCU
11883 ± 24% -33.7% 7873 ± 4% sched_debug.cpu#66.nr_load_updates
15180 ± 31% -38.8% 9297 ± 3% sched_debug.cpu#14.nr_load_updates
4133 ± 47% -54.2% 1893 ± 31% sched_debug.cpu#2.sched_goidle
8430 ± 46% -49.4% 4265 ± 36% sched_debug.cpu#2.nr_switches
36424 ± 2% +44.3% 52546 ± 3% slabinfo.kmalloc-256.active_objs
36804 ± 2% +43.8% 52915 ± 3% slabinfo.kmalloc-256.num_objs
1149 ± 2% +43.8% 1653 ± 3% slabinfo.kmalloc-256.num_slabs
1149 ± 2% +43.8% 1653 ± 3% slabinfo.kmalloc-256.active_slabs
12722 ± 8% -27.6% 9209 ± 2% sched_debug.cpu#18.nr_load_updates
750 ± 43% +65.8% 1244 ± 16% sched_debug.cpu#38.nr_switches
758 ± 42% +65.0% 1251 ± 15% sched_debug.cpu#38.sched_count
287 ± 40% +70.1% 488 ± 21% sched_debug.cpu#38.sched_goidle
13470 ± 26% -30.7% 9336 ± 6% sched_debug.cpu#22.nr_load_updates
0.00 ± 26% +45.8% 0.00 ± 11% sched_debug.rt_rq[20]:/.rt_time
11704 ± 16% -24.1% 8881 ± 1% sched_debug.cpu#30.nr_load_updates
161 ± 47% +60.4% 258 ± 10% sched_debug.cpu#54.ttwu_local
952 ± 1% +11.9% 1065 ± 5% slabinfo.Acpi-State.num_slabs
952 ± 1% +11.9% 1065 ± 5% slabinfo.Acpi-State.active_slabs
48596 ± 1% +11.9% 54376 ± 5% slabinfo.Acpi-State.num_objs
48595 ± 1% +11.6% 54246 ± 5% slabinfo.Acpi-State.active_objs
475081 ± 4% -10.5% 425176 ± 3% cpuidle.C6-NHM.usage
861 ± 10% +18.9% 1024 ± 2% numa-meminfo.node2.PageTables
2694 ± 8% +13.6% 3059 ± 8% numa-vmstat.node3.nr_slab_reclaimable
10779 ± 8% +13.6% 12240 ± 8% numa-meminfo.node3.SReclaimable
4610 ± 3% -6.6% 4305 ± 5% sched_debug.cfs_rq[64]:/.tg_load_avg
4627 ± 4% -6.6% 4323 ± 4% sched_debug.cfs_rq[63]:/.tg_load_avg
4518 ± 3% -6.1% 4241 ± 5% sched_debug.cfs_rq[70]:/.tg_load_avg
1677 ± 4% -11.9% 1478 ± 2% vmstat.system.cs
1509 ± 1% +2.5% 1546 ± 1% vmstat.system.in

testbox/testcase/testparams: wsm/will-it-scale/performance-unlink2

b926e6f61a26036e 2a23c6b8a9c42620182a2d2cfc
---------------- --------------------------
36.57 ± 0% -39.1% 22.28 ± 1% will-it-scale.time.user_time
192292 ± 1% +3.1% 198236 ± 1% will-it-scale.per_thread_ops
990 ± 0% +1.4% 1004 ± 0% will-it-scale.time.system_time
205532 ± 0% +2.0% 209720 ± 0% will-it-scale.per_process_ops
0.48 ± 0% -1.5% 0.47 ± 0% will-it-scale.scalability
36.57 ± 0% -39.1% 22.28 ± 1% time.user_time
554 ± 37% -36.0% 354 ± 43% sched_debug.cfs_rq[4]:/.tg_load_contrib
583050 ± 15% -20.4% 463937 ± 21% sched_debug.cfs_rq[1]:/.min_vruntime
70523 ± 16% -19.8% 56555 ± 20% sched_debug.cfs_rq[1]:/.exec_clock
63 ± 19% +48.2% 93 ± 8% sched_debug.cfs_rq[5]:/.runnable_load_avg
80 ± 11% -18.0% 66 ± 13% sched_debug.cpu#1.cpu_load[4]
55 ± 4% +26.6% 70 ± 14% sched_debug.cpu#3.cpu_load[4]
82 ± 10% -17.4% 67 ± 14% sched_debug.cpu#1.cpu_load[3]
60 ± 2% +22.1% 73 ± 7% sched_debug.cpu#3.cpu_load[3]
83 ± 10% -16.3% 69 ± 14% sched_debug.cpu#1.cpu_load[2]
90559 ± 8% -17.7% 74537 ± 14% sched_debug.cpu#1.nr_load_updates
67 ± 2% +18.3% 79 ± 8% sched_debug.cpu#5.cpu_load[4]
1.29 ± 7% -6.8% 1.20 ± 7% perf-profile.cpu-cycles.security_inode_init_security.shmem_mknod.shmem_create.vfs_create.do_last
65 ± 2% +15.8% 75 ± 5% sched_debug.cpu#3.cpu_load[2]
68 ± 11% +20.2% 81 ± 4% sched_debug.cpu#7.cpu_load[4]
71 ± 8% +19.0% 84 ± 1% sched_debug.cpu#7.cpu_load[3]
2526 ± 5% -7.0% 2349 ± 5% sched_debug.cpu#8.curr->pid
25711 ± 6% +10.6% 28438 ± 5% sched_debug.cfs_rq[5]:/.avg->runnable_avg_sum
29189 ± 9% +12.3% 32783 ± 4% sched_debug.cfs_rq[7]:/.avg->runnable_avg_sum
636 ± 10% +12.3% 714 ± 4% sched_debug.cfs_rq[7]:/.tg_runnable_contrib
75 ± 6% +15.3% 86 ± 2% sched_debug.cpu#7.cpu_load[2]

lkp-wsx02: Westmere-EX
Memory: 128G

wsm: Westmere
Memory: 6G




time.user_time

28 ++---------------------------------------------------------------------+
26 ++ .* |
| * * *. * :*.* .***.* .***. |
24 +*.** + *.* *.***.***.* * .***.**.* .* : * * ** **.* *.**
22 *+ * ** ** * * |
| |
20 ++ |
18 ++ |
16 ++ |
| |
14 ++ |
12 ++ |
| |
10 ++ O OO OOO OOO OOO OO OOO OOO OO OO OOO O |
8 OO-OO---------------------------------O------OO-OO-OOO-O---------------+


aim9.time.user_time

28 ++---------------------------------------------------------------------+
26 ++ .* |
| * * *. * :*.* .***.* .***. |
24 +*.** + *.* *.***.***.* * .***.**.* .* : * * ** **.* *.**
22 *+ * ** ** * * |
| |
20 ++ |
18 ++ |
16 ++ |
| |
14 ++ |
12 ++ |
| |
10 ++ O OO OOO OOO OOO OO OOO OOO OO OO OOO O |
8 OO-OO---------------------------------O------OO-OO-OOO-O---------------+


aim9.time.system_time

294 ++--------------------------------------------------------------------+
292 OO O |
| OOO OOO OOO OO OOO OOO OOO OOO OOO O O OOO OO O |
290 ++ OO OO |
288 ++ |
| |
286 ++ |
284 ++ |
282 ++ |
| |
280 ++ |
278 *+ * *. * |
|*.** + **.**.** .* .***.***.** ** + *. *.* **.***.*|
276 ++ ***.* * ** * *.** **.** :*.* *
274 ++-----------------------------------------*--------------*-----------+


slabinfo.kmalloc-256.active_objs

60000 ++------------------------------------------------------------------+
| |
55000 ++ O O |
O OO OO O O O O O O O OO OO OO |
|O O O O OOO OO O O O O O O O |
50000 ++ O O O O O |
| |
45000 ++ |
| |
40000 ++ |
| * * .* *. * *. |
| **.* *. * ***.***.* **. *.* ***. :* :* :: * *.* :*. * **
35000 **.* ** * :+ * ** * + * *.* * * * * |
| * * |
30000 ++------------------------------------------------------------------+


slabinfo.kmalloc-256.num_objs

60000 ++------------------------------------------------------------------+
| |
55000 ++ O O O |
O OO OO O OO O OO O O O O OO OO OO |
|O O O O O O O O O O O O O O |
50000 ++ O O O |
| |
45000 ++ |
| |
40000 ++ |
| .* .* ** ** * .* *. * *. |
* **.* *.** .*** ** ***. *.* + *. : *.* * :: ***.* :*.** **
35000 +*.* ** * ** ** * * * |
| |
30000 ++------------------------------------------------------------------+


slabinfo.kmalloc-256.active_slabs

1800 ++-------------------------------------------------------------------+
| O O O |
1700 O+ OO OO O O O O O O OOO O O |
1600 +O O OOO O O O O OO O O O |
| O O O O O O O O |
1500 ++ |
| |
1400 ++ |
| |
1300 ++ |
1200 ++ *. * * |
| * * *. **. * * **.* : * + **. **. :*. *.*|
1100 *+ :*.** .** + * * * *.** .* *.* : : ** * ***.* ** *
|*.* * * * * * |
1000 ++-------------------------------------------------------------------+


slabinfo.kmalloc-256.num_slabs

1800 ++-------------------------------------------------------------------+
| O O O |
1700 O+ OO OO O O O O O O OOO O O |
1600 +O O OOO O O O O OO O O O |
| O O O O O O O O |
1500 ++ |
| |
1400 ++ |
| |
1300 ++ |
1200 ++ *. * * |
| * * *. **. * * **.* : * + **. **. :*. *.*|
1100 *+ :*.** .** + * * * *.** .* *.* : : ** * ***.* ** *
|*.* * * * * * |
1000 ++-------------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample

To reproduce:

apt-get install ruby                 # the lkp-tests harness is written in Ruby
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml             # job.yaml is the job file attached in this email
bin/run-local job.yaml               # runs the benchmark described by the job file


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Huang, Ying

---
testcase: aim9
default-monitors:
  wait: pre-test
  uptime:
  iostat:
  vmstat:
  numa-numastat:
  numa-vmstat:
  numa-meminfo:
  proc-vmstat:
  proc-stat:
  meminfo:
  slabinfo:
  interrupts:
  lock_stat:
  latency_stats:
  softirqs:
  bdi_dev_mapping:
  diskstats:
  nfsstat:
  cpuidle:
  cpufreq-stats:
  turbostat:
  pmeter:
  sched_debug:
    interval: 10
default_watchdogs:
  watch-oom:
  watchdog:
cpufreq_governor: performance
commit: d64557360f2f4a478e99ae67e83fe5f79dfee036
model: Westmere-EX
memory: 128G
nr_cpu: 80
nr_hdd_partitions: 0
hdd_partitions:
swap_partitions:
rootfs_partition:
rootfs: debian-x86_64-2015-02-07.cgz
aim9:
  testtime: 300s
  test: creat-clo
testbox: lkp-wsx02
tbox_group: lkp-wsx02
kconfig: x86_64-rhel
enqueue_time: 2015-02-12 07:11:40.719413274 +08:00
head_commit: d64557360f2f4a478e99ae67e83fe5f79dfee036
base_commit: bfa76d49576599a4b9f9b7a71f23d73d6dcff735
branch: linux-devel/devel-hourly-2015021221
kernel: "/kernel/x86_64-rhel/d64557360f2f4a478e99ae67e83fe5f79dfee036/vmlinuz-3.19.0-wl-ath-gd645573"
user: lkp
queue: cyclic
result_root: "/result/lkp-wsx02/aim9/performance-300s-creat-clo/debian-x86_64-2015-02-07.cgz/x86_64-rhel/d64557360f2f4a478e99ae67e83fe5f79dfee036/0"
job_file: "/lkp/scheduled/lkp-wsx02/cyclic_aim9-performance-300s-creat-clo-debian-x86_64.cgz-x86_64-rhel-HEAD-d64557360f2f4a478e99ae67e83fe5f79dfee036-0-20150212-88039-vh9q2c.yaml"
dequeue_time: 2015-02-13 02:28:39.980805507 +08:00
job_state: finished
loadavg: 0.88 1.58 0.92 1/649 11590
start_time: '1423765800'
end_time: '1423766100'
version: "/lkp/lkp/.src-20150212-220000"
_______________________________________________
LKP mailing list
LKP@linux.intel.com