Subject: [lkp] [mm, oom] faad2185f4: vm-scalability.throughput -11.8% regression

FYI, we noticed a vm-scalability.throughput -11.8% regression due to the following commit:

https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit faad2185f482578d50d363746006a1b95dde9d0a ("mm, oom: rework oom detection")

on test machine: lkp-hsw-ep2: 72 threads Brickland Haswell-EP with 128G memory
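
A note on reading the comparison below: the new-kernel run ended with job_state: OOM
(see the attached job output further down), i.e. the benchmark was OOM-killed before
finishing its 300s runtime, which is why every time.* metric shows -100.0%. That matches
what the commit changes: it replaces the old zone_reclaimable()-based "keep retrying
while a zone still looks reclaimable" decision with a bounded count of no-progress
direct-reclaim rounds (should_reclaim_retry() in mm/page_alloc.c), so the OOM killer can
fire much earlier under heavy swap load. A toy userspace model of that kind of bounded
detection -- names and constants below are stand-ins, not the kernel code:

    /* compiles standalone: gcc -O2 -o oom-model oom-model.c */
    #include <stdio.h>

    /* stand-in for the kernel's MAX_RECLAIM_RETRIES */
    #define MAX_NO_PROGRESS_RETRIES 16

    /* stand-in for one direct-reclaim pass (do_try_to_free_pages());
       returns pages freed, 0 once nothing is reclaimable any more */
    static long reclaim_one_round(void)
    {
        static long reclaimable = 50;
        long freed = reclaimable >= 8 ? 8 : reclaimable;

        reclaimable -= freed;
        return freed;
    }

    int main(void)
    {
        long needed = 64, freed_total = 0;
        int no_progress = 0;

        while (freed_total < needed) {
            long freed = reclaim_one_round();

            freed_total += freed;
            if (freed > 0) {
                no_progress = 0;    /* any progress resets the counter */
            } else if (++no_progress > MAX_NO_PROGRESS_RETRIES) {
                /* the pre-faad2185f4 logic kept retrying for as long as
                   a zone still *looked* reclaimable; the rework gives up
                   after a fixed number of fruitless rounds instead */
                puts("no forward progress -> declare OOM");
                return 1;
            }
        }
        puts("allocation satisfied");
        return 0;
    }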


=========================================================================================
compiler/cpufreq_governor/kconfig/nr_pmem/nr_task/rootfs/tbox_group/test/testcase/thp_defrag/thp_enabled:
gcc-4.9/performance/x86_64-rhel-pmem/1/16/debian-x86_64-2015-02-07.cgz/lkp-hsw-ep2/swap-w-rand/vm-scalability/never/never

commit:
0da9597ac9c0adb8a521b9935fbe43d8b0e8cc64
faad2185f482578d50d363746006a1b95dde9d0a

0da9597ac9c0adb8 faad2185f482578d50d3637460
---------------- --------------------------
       fail:runs          %reproduction              fail:runs
         %stddev          %change                      %stddev
               \          |                                  \
43802 ± 0% -11.8% 38653 ± 0% vm-scalability.throughput
310.35 ± 0% -100.0% 0.00 ± -1% vm-scalability.time.elapsed_time
310.35 ± 0% -100.0% 0.00 ± -1% vm-scalability.time.elapsed_time.max
234551 ± 6% -100.0% 0.00 ± -1% vm-scalability.time.involuntary_context_switches
44654748 ± 9% -100.0% 0.00 ± -1% vm-scalability.time.major_page_faults
2442686 ± 11% -100.0% 0.00 ± -1% vm-scalability.time.maximum_resident_set_size
34477365 ± 0% -100.0% 0.00 ± -1% vm-scalability.time.minor_page_faults
4096 ± 0% -100.0% 0.00 ± -1% vm-scalability.time.page_size
1595 ± 0% -100.0% 0.00 ± -1% vm-scalability.time.percent_of_cpu_this_job_got
4935 ± 0% -100.0% 0.00 ± -1% vm-scalability.time.system_time
19.08 ± 6% -100.0% 0.00 ± -1% vm-scalability.time.user_time
342.89 ± 0% -71.7% 96.99 ± -1% uptime.boot
18719 ± 1% -70.3% 5555 ± 0% uptime.idle
227271 ± 3% -68.0% 72623 ± 0% softirqs.RCU
208173 ± 7% -69.7% 63118 ± 0% softirqs.SCHED
3204631 ± 1% -73.0% 866292 ± 0% softirqs.TIMER
739.50 ± 0% -1.6% 728.00 ± 0% turbostat.Avg_MHz
61.50 ± 3% +20.3% 74.00 ± -1% turbostat.CoreTmp
0.07 ± 57% +1092.7% 0.82 ±-121% turbostat.Pkg%pc2
64.75 ± 2% +14.3% 74.00 ± -1% turbostat.PkgTmp
51.45 ± 0% +1.8% 52.39 ± -1% turbostat.RAMWatt
789322 ± 4% +49.2% 1177649 ± 0% vmstat.memory.free
53141272 ± 1% -45.8% 28781900 ± 0% vmstat.memory.swpd
0.00 ± 0% +Inf% 1.00 ±-100% vmstat.procs.b
780938 ± 7% +66.2% 1297589 ± 0% vmstat.swap.so
4217 ± 6% +103.4% 8576 ± 0% vmstat.system.cs
204460 ± 6% +62.0% 331270 ± 0% vmstat.system.in
9128034 ± 43% -85.7% 1306182 ± 0% cpuidle.C1E-HSW.time
5009 ± 52% -88.9% 557.00 ± 0% cpuidle.C1E-HSW.usage
9110 ±130% -93.3% 611.00 ± 0% cpuidle.C3-HSW.usage
1.655e+10 ± 0% -79.5% 3.397e+09 ± 0% cpuidle.C6-HSW.time
621881 ± 2% -71.5% 177398 ± 0% cpuidle.C6-HSW.usage
53981965 ± 58% -80.4% 10553789 ± 0% cpuidle.POLL.time
85773 ± 9% -18.4% 69982 ± 0% cpuidle.POLL.usage
2925199 ± 94% -75.8% 706866 ± 0% numa-numastat.node0.local_node
2931002 ± 93% -75.6% 716120 ± 0% numa-numastat.node0.numa_hit
12041792 ± 24% -67.4% 3919657 ± 0% numa-numastat.node0.numa_miss
12047595 ± 24% -67.4% 3928911 ± 0% numa-numastat.node0.other_node
64592910 ± 10% -66.5% 21635175 ± 0% numa-numastat.node1.local_node
12041716 ± 24% -67.5% 3919210 ± 0% numa-numastat.node1.numa_foreign
64601023 ± 10% -66.5% 21639833 ± 0% numa-numastat.node1.numa_hit
4730 ± 13% +290.9% 18491 ± 0% meminfo.Inactive(file)
12978 ± 8% +46.3% 18985 ± 0% meminfo.Mapped
703327 ± 9% +72.4% 1212584 ± 0% meminfo.MemAvailable
732344 ± 8% +65.0% 1208500 ± 0% meminfo.MemFree
99286 ± 4% +30.3% 129348 ± 0% meminfo.SReclaimable
3920 ± 21% +332.5% 16955 ± 0% meminfo.Shmem
206164 ± 2% +14.7% 236528 ± 0% meminfo.Slab
1113 ± 10% +23.6% 1377 ± 0% meminfo.SwapCached
47130509 ± 3% +53.1% 72150055 ± 0% meminfo.SwapFree
1012 ± 12% +60.9% 1628 ± 0% slabinfo.blkdev_requests.active_objs
1012 ± 12% +60.9% 1628 ± 0% slabinfo.blkdev_requests.num_objs
1531 ± 5% +12.5% 1722 ± 0% slabinfo.mnt_cache.active_objs
1531 ± 5% +12.5% 1722 ± 0% slabinfo.mnt_cache.num_objs
9719 ± 9% -16.8% 8087 ± 0% slabinfo.proc_inode_cache.num_objs
92960 ± 6% +69.6% 157683 ± 0% slabinfo.radix_tree_node.active_objs
9336 ± 9% +35.2% 12624 ± 0% slabinfo.radix_tree_node.active_slabs
95203 ± 6% +66.0% 158075 ± 0% slabinfo.radix_tree_node.num_objs
9336 ± 9% +35.2% 12624 ± 0% slabinfo.radix_tree_node.num_slabs
310.35 ± 0% -100.0% 0.00 ± -1% time.elapsed_time
310.35 ± 0% -100.0% 0.00 ± -1% time.elapsed_time.max
600.00 ± 27% -100.0% 0.00 ± -1% time.file_system_inputs
234551 ± 6% -100.0% 0.00 ± -1% time.involuntary_context_switches
44654748 ± 9% -100.0% 0.00 ± -1% time.major_page_faults
2442686 ± 11% -100.0% 0.00 ± -1% time.maximum_resident_set_size
34477365 ± 0% -100.0% 0.00 ± -1% time.minor_page_faults
4096 ± 0% -100.0% 0.00 ± -1% time.page_size
1595 ± 0% -100.0% 0.00 ± -1% time.percent_of_cpu_this_job_got
4935 ± 0% -100.0% 0.00 ± -1% time.system_time
19.08 ± 6% -100.0% 0.00 ± -1% time.user_time
390.50 ± 34% -100.0% 0.00 ± -1% time.voluntary_context_switches
914507 ± 7% -13.3% 792912 ± 0% numa-meminfo.node0.Active
913915 ± 7% -13.5% 790259 ± 0% numa-meminfo.node0.Active(anon)
592.00 ± 31% +348.1% 2653 ± 0% numa-meminfo.node0.Active(file)
1217059 ± 7% -13.7% 1049893 ± 0% numa-meminfo.node0.AnonPages
306384 ± 7% -12.0% 269631 ± 0% numa-meminfo.node0.Inactive
304389 ± 7% -14.4% 260426 ± 0% numa-meminfo.node0.Inactive(anon)
1995 ± 8% +361.3% 9204 ± 0% numa-meminfo.node0.Inactive(file)
5801 ± 4% +16.7% 6772 ± 0% numa-meminfo.node0.Mapped
32196 ± 7% +36.4% 43932 ± 0% numa-meminfo.node0.MemFree
55651 ± 5% +10.6% 61563 ± 0% numa-meminfo.node0.SUnreclaim
2966 ± 15% +232.9% 9875 ± 0% numa-meminfo.node1.Inactive(file)
7446 ± 13% +67.7% 12486 ± 0% numa-meminfo.node1.Mapped
679948 ± 6% +76.7% 1201231 ± 0% numa-meminfo.node1.MemFree
66811 ± 7% +48.5% 99246 ± 0% numa-meminfo.node1.SReclaimable
51227 ± 5% -11.0% 45616 ± 0% numa-meminfo.node1.SUnreclaim
3090 ± 39% +415.6% 15932 ± 0% numa-meminfo.node1.Shmem
118039 ± 3% +22.7% 144863 ± 0% numa-meminfo.node1.Slab
0.00 ± -1% +Inf% 1.58 ±-63% perf-profile.cycles-pp.__alloc_pages_slowpath.constprop.93.__alloc_pages_nodemask.alloc_kmem_pages_node.copy_process._do_fork
0.00 ± -1% +Inf% 26.40 ± -3% perf-profile.cycles-pp.__alloc_pages_slowpath.constprop.93.__alloc_pages_nodemask.alloc_pages_vma.__read_swap_cache_async.read_swap_cache_async
0.00 ± -1% +Inf% 39.64 ± -2% perf-profile.cycles-pp.__alloc_pages_slowpath.constprop.93.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault.__do_page_fault
5.20 ±140% -100.0% 0.00 ± -1% perf-profile.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_kmem_pages_node.copy_process
25.02 ± 10% -100.0% 0.00 ± -1% perf-profile.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_vma.__read_swap_cache_async
38.03 ± 9% -100.0% 0.00 ± -1% perf-profile.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault
0.00 ± -1% +Inf% 1.59 ±-62% perf-profile.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_kmem_pages_node
0.00 ± -1% +Inf% 65.24 ± -1% perf-profile.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma
5.20 ±140% -100.0% 0.00 ± -1% perf-profile.cycles-pp.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_kmem_pages_node
63.09 ± 8% -100.0% 0.00 ± -1% perf-profile.cycles-pp.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_vma
0.00 ± -1% +Inf% 67.08 ± -1% perf-profile.cycles-pp.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask
69.00 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.shrink_zone_memcg.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask
0.00 ± -1% +Inf% 66.87 ± -1% perf-profile.cycles-pp.shrink_zone_memcg.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath
5.20 ±140% -100.0% 0.00 ± -1% perf-profile.cycles-pp.try_to_free_pages.__alloc_pages_nodemask.alloc_kmem_pages_node.copy_process._do_fork
25.05 ± 11% -100.0% 0.00 ± -1% perf-profile.cycles-pp.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_vma.__read_swap_cache_async.read_swap_cache_async
38.06 ± 9% -100.0% 0.00 ± -1% perf-profile.cycles-pp.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault.__do_page_fault
0.00 ± -1% +Inf% 1.59 ±-62% perf-profile.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_kmem_pages_node.copy_process
0.00 ± -1% +Inf% 26.22 ± -3% perf-profile.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma.__read_swap_cache_async
0.00 ± -1% +Inf% 39.01 ± -2% perf-profile.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault
228466 ± 7% -13.4% 197776 ± 0% numa-vmstat.node0.nr_active_anon
147.50 ± 31% +334.6% 641.00 ± 0% numa-vmstat.node0.nr_active_file
304255 ± 7% -13.6% 262822 ± 0% numa-vmstat.node0.nr_anon_pages
8062 ± 8% +34.3% 10829 ± 0% numa-vmstat.node0.nr_free_pages
76095 ± 7% -14.3% 65250 ± 0% numa-vmstat.node0.nr_inactive_anon
498.00 ± 8% +347.8% 2230 ± 0% numa-vmstat.node0.nr_inactive_file
1466 ± 5% +15.0% 1686 ± 0% numa-vmstat.node0.nr_mapped
13912 ± 5% +10.6% 15390 ± 0% numa-vmstat.node0.nr_slab_unreclaimable
7585474 ± 5% -73.6% 2005989 ± 0% numa-vmstat.node0.nr_vmscan_write
7585495 ± 5% -73.6% 2006038 ± 0% numa-vmstat.node0.nr_written
2042806 ± 73% -60.3% 810553 ± 0% numa-vmstat.node0.numa_hit
1973969 ± 76% -62.6% 737625 ± 0% numa-vmstat.node0.numa_local
6640606 ± 22% -72.4% 1834872 ± 0% numa-vmstat.node0.numa_miss
6709443 ± 22% -71.6% 1907800 ± 0% numa-vmstat.node0.numa_other
169806 ± 5% +71.9% 291868 ± 0% numa-vmstat.node1.nr_free_pages
740.75 ± 15% +223.1% 2393 ± 0% numa-vmstat.node1.nr_inactive_file
1860 ± 13% +69.4% 3153 ± 0% numa-vmstat.node1.nr_mapped
767.88 ± 39% +415.4% 3958 ± 0% numa-vmstat.node1.nr_shmem
16698 ± 7% +46.9% 24534 ± 0% numa-vmstat.node1.nr_slab_reclaimable
12806 ± 5% -10.9% 11405 ± 0% numa-vmstat.node1.nr_slab_unreclaimable
27889038 ± 6% -68.4% 8818477 ± 0% numa-vmstat.node1.nr_vmscan_write
27889106 ± 6% -68.4% 8818479 ± 0% numa-vmstat.node1.nr_written
6640458 ± 22% -72.4% 1834609 ± 0% numa-vmstat.node1.numa_foreign
38283180 ± 9% -71.4% 10962630 ± 0% numa-vmstat.node1.numa_hit
38265243 ± 9% -71.4% 10948754 ± 0% numa-vmstat.node1.numa_local
539498 ± 6% -66.2% 182224 ± 0% proc-vmstat.allocstall
144.38 ± 22% -96.5% 5.00 ±-20% proc-vmstat.compact_fail
15889726 ± 25% -87.2% 2027142 ± 0% proc-vmstat.compact_free_scanned
7424 ± 21% -95.5% 337.00 ± 0% proc-vmstat.compact_isolated
18421 ±120% -98.3% 310.00 ± 0% proc-vmstat.compact_migrate_scanned
192.00 ± 21% -96.4% 7.00 ±-14% proc-vmstat.compact_stall
49525 ± 43% +154.6% 126090 ± 0% proc-vmstat.kswapd_low_wmark_hit_quickly
17344 ± 4% +73.0% 30013 ± 0% proc-vmstat.nr_dirty_background_threshold
34690 ± 4% +73.0% 60026 ± 0% proc-vmstat.nr_dirty_threshold
180484 ± 4% +67.9% 303083 ± 0% proc-vmstat.nr_free_pages
1227 ± 10% +276.8% 4623 ± 0% proc-vmstat.nr_inactive_file
3303 ± 6% +43.0% 4722 ± 0% proc-vmstat.nr_mapped
1012 ± 17% +321.7% 4270 ± 0% proc-vmstat.nr_shmem
24900 ± 4% +29.6% 32265 ± 0% proc-vmstat.nr_slab_reclaimable
35587470 ± 5% -69.7% 10775004 ± 0% proc-vmstat.nr_vmscan_write
61007414 ± 6% -65.6% 21016129 ± 0% proc-vmstat.nr_written
16970144 ± 12% -37.0% 10686065 ± 0% proc-vmstat.numa_foreign
10074000 ± 1% -45.2% 5519749 ± 0% proc-vmstat.numa_hint_faults
9673661 ± 5% -44.4% 5377833 ± 0% proc-vmstat.numa_hint_faults_local
67528367 ± 6% -67.0% 22278204 ± 0% proc-vmstat.numa_hit
67514451 ± 6% -67.0% 22264292 ± 0% proc-vmstat.numa_local
16969897 ± 12% -37.0% 10686272 ± 0% proc-vmstat.numa_miss
16983813 ± 12% -37.0% 10700184 ± 0% proc-vmstat.numa_other
41943046 ± 1% -43.9% 23535513 ± 0% proc-vmstat.numa_pte_updates
49539 ± 43% +154.5% 126102 ± 0% proc-vmstat.pageoutrun
45300466 ± 9% -79.2% 9418945 ± 0% proc-vmstat.pgactivate
558557 ± 14% -34.1% 367818 ± 0% proc-vmstat.pgalloc_dma
14967174 ± 3% -69.1% 4626484 ± 0% proc-vmstat.pgalloc_dma32
71037855 ± 7% -57.6% 30119030 ± 0% proc-vmstat.pgalloc_normal
62292933 ± 6% -65.2% 21706559 ± 0% proc-vmstat.pgdeactivate
79824509 ± 5% -56.2% 34976920 ± 0% proc-vmstat.pgfault
86163698 ± 6% -68.5% 27162999 ± 0% proc-vmstat.pgfree
44685673 ± 9% -79.4% 9192073 ± 0% proc-vmstat.pgmajfault
13765509 ± 7% -69.7% 4168976 ± 0% proc-vmstat.pgrefill_dma32
48547731 ± 6% -63.8% 17561899 ± 0% proc-vmstat.pgrefill_normal
12122138 ± 7% -69.7% 3675632 ± 0% proc-vmstat.pgscan_direct_dma32
67953310 ± 7% -66.4% 22842830 ± 0% proc-vmstat.pgscan_direct_normal
11915527 ± 10% -79.3% 2460668 ± 0% proc-vmstat.pgscan_kswapd_dma32
15559179 ± 9% -80.7% 2995996 ± 0% proc-vmstat.pgscan_kswapd_normal
8844259 ± 8% -70.8% 2582588 ± 0% proc-vmstat.pgsteal_direct_dma32
43061102 ± 7% -64.0% 15515081 ± 0% proc-vmstat.pgsteal_direct_normal
4732303 ± 6% -69.0% 1469200 ± 0% proc-vmstat.pgsteal_kswapd_dma32
4380170 ± 7% -66.6% 1462100 ± 0% proc-vmstat.pgsteal_kswapd_normal
44709819 ± 9% -79.4% 9217280 ± 0% proc-vmstat.pswpin
61007674 ± 6% -65.6% 21016726 ± 0% proc-vmstat.pswpout
37.61 ± 8% -39.3% 22.83 ± -4% sched_debug.cfs_rq:/.load.avg
884.52 ± 5% -36.7% 559.50 ± 0% sched_debug.cfs_rq:/.load.max
146.88 ± 5% -38.3% 90.64 ± -1% sched_debug.cfs_rq:/.load.stddev
47.93 ± 5% +28.5% 61.60 ± -1% sched_debug.cfs_rq:/.load_avg.avg
1095 ± 10% +52.2% 1667 ± 0% sched_debug.cfs_rq:/.load_avg.max
170.96 ± 7% +39.6% 238.66 ± 0% sched_debug.cfs_rq:/.load_avg.stddev
578829 ± 2% -80.5% 112739 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
2507544 ± 0% -80.8% 482665 ± 0% sched_debug.cfs_rq:/.min_vruntime.max
998179 ± 1% -82.2% 177613 ± 0% sched_debug.cfs_rq:/.min_vruntime.stddev
0.24 ± 2% -37.1% 0.15 ±-654% sched_debug.cfs_rq:/.nr_running.avg
0.41 ± 1% -22.1% 0.32 ±-312% sched_debug.cfs_rq:/.nr_running.stddev
34.69 ± 0% -38.3% 21.40 ± -4% sched_debug.cfs_rq:/.runnable_load_avg.avg
849.33 ± 0% -37.9% 527.50 ± 0% sched_debug.cfs_rq:/.runnable_load_avg.max
138.61 ± 0% -38.4% 85.43 ± -1% sched_debug.cfs_rq:/.runnable_load_avg.stddev
444145 ± 30% -87.1% 57376 ± 0% sched_debug.cfs_rq:/.spread0.avg
2372869 ± 5% -82.0% 427303 ± 0% sched_debug.cfs_rq:/.spread0.max
998183 ± 1% -82.2% 177613 ± 0% sched_debug.cfs_rq:/.spread0.stddev
242.15 ± 1% -28.8% 172.45 ± 0% sched_debug.cfs_rq:/.util_avg.avg
392.49 ± 0% -21.4% 308.56 ± 0% sched_debug.cfs_rq:/.util_avg.stddev
184988 ± 1% -64.1% 66460 ± 0% sched_debug.cpu.clock.avg
184996 ± 1% -64.1% 66466 ± 0% sched_debug.cpu.clock.max
184978 ± 1% -64.1% 66453 ± 0% sched_debug.cpu.clock.min
5.60 ± 18% -30.8% 3.88 ±-25% sched_debug.cpu.clock.stddev
184988 ± 1% -64.1% 66460 ± 0% sched_debug.cpu.clock_task.avg
184996 ± 1% -64.1% 66466 ± 0% sched_debug.cpu.clock_task.max
184978 ± 1% -64.1% 66453 ± 0% sched_debug.cpu.clock_task.min
5.60 ± 18% -30.8% 3.88 ±-25% sched_debug.cpu.clock_task.stddev
36.54 ± 4% -42.2% 21.11 ± -4% sched_debug.cpu.cpu_load[0].avg
950.98 ± 7% -44.5% 527.50 ± 0% sched_debug.cpu.cpu_load[0].max
151.40 ± 6% -43.7% 85.22 ± -1% sched_debug.cpu.cpu_load[0].stddev
35.91 ± 2% -41.2% 21.10 ± -4% sched_debug.cpu.cpu_load[1].avg
899.77 ± 3% -41.4% 527.50 ± 0% sched_debug.cpu.cpu_load[1].max
145.18 ± 3% -41.3% 85.22 ± -1% sched_debug.cpu.cpu_load[1].stddev
35.61 ± 2% -40.7% 21.12 ± -4% sched_debug.cpu.cpu_load[2].avg
877.87 ± 2% -39.9% 527.50 ± 0% sched_debug.cpu.cpu_load[2].max
142.54 ± 2% -40.2% 85.23 ± -1% sched_debug.cpu.cpu_load[2].stddev
35.37 ± 2% -40.1% 21.20 ± -4% sched_debug.cpu.cpu_load[3].avg
867.60 ± 2% -39.2% 527.50 ± 0% sched_debug.cpu.cpu_load[3].max
141.21 ± 2% -39.6% 85.33 ± -1% sched_debug.cpu.cpu_load[3].stddev
35.16 ± 1% -39.6% 21.24 ± -4% sched_debug.cpu.cpu_load[4].avg
858.88 ± 2% -38.6% 527.50 ± 0% sched_debug.cpu.cpu_load[4].max
140.16 ± 2% -39.0% 85.43 ± -1% sched_debug.cpu.cpu_load[4].stddev
456.75 ± 2% -41.9% 265.40 ± 0% sched_debug.cpu.curr->pid.avg
5331 ± 1% -53.3% 2491 ± 0% sched_debug.cpu.curr->pid.max
912.17 ± 2% -38.3% 562.54 ± 0% sched_debug.cpu.curr->pid.stddev
37.90 ± 7% -39.8% 22.83 ± -4% sched_debug.cpu.load.avg
904.00 ± 7% -38.1% 559.50 ± 0% sched_debug.cpu.load.max
149.42 ± 6% -39.3% 90.64 ± -1% sched_debug.cpu.load.stddev
0.00 ± 5% -31.3% 0.00 ±-4394145% sched_debug.cpu.next_balance.stddev
72445 ± 0% -75.8% 17509 ± 0% sched_debug.cpu.nr_load_updates.avg
155443 ± 0% -77.5% 34902 ± 0% sched_debug.cpu.nr_load_updates.max
19501 ± 33% -61.4% 7530 ± 0% sched_debug.cpu.nr_load_updates.min
46290 ± 1% -80.6% 8995 ± 0% sched_debug.cpu.nr_load_updates.stddev
0.25 ± 3% -34.8% 0.16 ±-626% sched_debug.cpu.nr_running.avg
1.12 ± 6% +33.3% 1.50 ±-66% sched_debug.cpu.nr_running.max
0.42 ± 2% -14.1% 0.36 ±-276% sched_debug.cpu.nr_running.stddev
9841 ± 4% -54.7% 4459 ± 0% sched_debug.cpu.nr_switches.avg
323.71 ± 11% -26.8% 237.00 ± 0% sched_debug.cpu.nr_switches.min
10606 ± 6% -26.9% 7748 ± 0% sched_debug.cpu.nr_switches.stddev
0.00 ± 68% +1100.0% 0.05 ±-2057% sched_debug.cpu.nr_uninterruptible.avg
184975 ± 1% -64.1% 66455 ± 0% sched_debug.cpu_clk
181780 ± 1% -65.3% 63027 ± 0% sched_debug.ktime
0.12 ± 5% +205.5% 0.36 ±-275% sched_debug.rt_rq:/.rt_time.avg
4.33 ± 7% +211.4% 13.48 ± -7% sched_debug.rt_rq:/.rt_time.max
0.60 ± 6% +208.6% 1.85 ±-54% sched_debug.rt_rq:/.rt_time.stddev
184975 ± 1% -64.1% 66455 ± 0% sched_debug.sched_clk


lkp-hsw-ep2: 72 threads Brickland Haswell-EP with 128G memory


[*] bisect-good sample
[O] bisect-bad sample

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Xiaolong
---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: vm-scalability
default-monitors:
  wait: activate-monitor
  kmsg:
  uptime:
  iostat:
  heartbeat:
  vmstat:
  numa-numastat:
  numa-vmstat:
  numa-meminfo:
  proc-vmstat:
  proc-stat:
    interval: 10
  meminfo:
  slabinfo:
  interrupts:
  lock_stat:
  latency_stats:
  softirqs:
  bdi_dev_mapping:
  diskstats:
  nfsstat:
  cpuidle:
  cpufreq-stats:
  turbostat:
  pmeter:
  sched_debug:
    interval: 60
cpufreq_governor: performance
default-watchdogs:
  oom-killer:
  watchdog:
commit: faad2185f482578d50d363746006a1b95dde9d0a
model: Brickland Haswell-EP
nr_cpu: 72
memory: 128G
hdd_partitions: "/dev/disk/by-id/ata-INTEL_SSDSC2BB480G6_BTWA5444064C480FGN-part2"
swap_partitions:
rootfs_partition: "/dev/disk/by-id/ata-INTEL_SSDSC2BB480G6_BTWA5444064C480FGN-part1"
category: benchmark
transparent_hugepage:
  thp_enabled: never
  thp_defrag: never
nr_task: 16
boot_params:
  bp_memmap: 96G!4G
disk:
  nr_pmem: 1
swap:
perf-profile:
  delay: 20
vm-scalability:
  test: swap-w-rand
kconfig: x86_64-rhel-pmem
queue: bisect
testbox: lkp-hsw-ep2
tbox_group: lkp-hsw-ep2
enqueue_time: 2016-04-22 16:40:02.055365386 +08:00
compiler: gcc-4.9
rootfs: debian-x86_64-2015-02-07.cgz
id: 5f09876e8980a7faae6038a029704e0b741e85ef
user: lkp
head_commit: 5e3497cca281616e7930b74a0076b7324dcc2057
base_commit: b562e44f507e863c6792946e4e1b1449fbbac85d
branch: linux-next/master
result_root: "/result/vm-scalability/performance-never-never-16-1-swap-w-rand/lkp-hsw-ep2/debian-x86_64-2015-02-07.cgz/x86_64-rhel-pmem/gcc-4.9/faad2185f482578d50d363746006a1b95dde9d0a/0"
job_file: "/lkp/scheduled/lkp-hsw-ep2/bisect_vm-scalability-performance-never-never-16-1-swap-w-rand-debian-x86_64-2015-02-07.cgz-x86_64-rhel-pmem-faad2185f482578d50d363746006a1b95dde9d0a-20160422-79893-kg4uvz-0.yaml"
max_uptime: 1500
initrd: "/osimage/debian/debian-x86_64-2015-02-07.cgz"
bootloader_append:
- root=/dev/ram0
- user=lkp
- job=/lkp/scheduled/lkp-hsw-ep2/bisect_vm-scalability-performance-never-never-16-1-swap-w-rand-debian-x86_64-2015-02-07.cgz-x86_64-rhel-pmem-faad2185f482578d50d363746006a1b95dde9d0a-20160422-79893-kg4uvz-0.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel-pmem
- branch=linux-next/master
- commit=faad2185f482578d50d363746006a1b95dde9d0a
- BOOT_IMAGE=/pkg/linux/x86_64-rhel-pmem/gcc-4.9/faad2185f482578d50d363746006a1b95dde9d0a/vmlinuz-4.5.0-02728-gfaad218
- memmap=96G!4G
- max_uptime=1500
- RESULT_ROOT=/result/vm-scalability/performance-never-never-16-1-swap-w-rand/lkp-hsw-ep2/debian-x86_64-2015-02-07.cgz/x86_64-rhel-pmem/gcc-4.9/faad2185f482578d50d363746006a1b95dde9d0a/0
- LKP_SERVER=inn
- |2-


  earlyprintk=ttyS0,115200 systemd.log_level=err
  debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100
  panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0
  console=ttyS0,115200 console=tty0 vga=normal

  rw
lkp_initrd: "/lkp/lkp/lkp-x86_64.cgz"
modules_initrd: "/pkg/linux/x86_64-rhel-pmem/gcc-4.9/faad2185f482578d50d363746006a1b95dde9d0a/modules.cgz"
bm_initrd: "/osimage/deps/debian-x86_64-2015-02-07.cgz/lkp.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/run-ipconfig.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/turbostat.cgz,/lkp/benchmarks/turbostat.cgz,/lkp/benchmarks/swap.cgz,/lkp/benchmarks/vm-scalability.cgz"
linux_headers_initrd: "/pkg/linux/x86_64-rhel-pmem/gcc-4.9/faad2185f482578d50d363746006a1b95dde9d0a/linux-headers.cgz"
kernel: "/pkg/linux/x86_64-rhel-pmem/gcc-4.9/faad2185f482578d50d363746006a1b95dde9d0a/vmlinuz-4.5.0-02728-gfaad218"
dequeue_time: 2016-04-22 16:59:45.398950966 +08:00
job_state: OOM
loadavg: 16.33 5.86 2.11 18/821 3328
start_time: '1461315634'
end_time: '1461315693'
version: "/lkp/lkp/.src-20160422-165027"
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu10/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu11/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu12/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu13/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu14/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu15/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu16/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu17/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu18/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu19/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu20/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu21/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu22/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu23/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu24/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu25/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu26/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu27/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu28/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu29/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu30/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu31/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu32/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu33/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu34/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu35/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu36/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu37/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu38/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu39/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu40/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu41/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu42/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu43/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu44/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu45/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu46/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu47/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu48/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu49/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu50/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu51/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu52/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu53/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu54/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu55/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu56/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu57/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu58/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu59/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu60/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu61/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu62/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu63/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu64/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu65/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu66/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu67/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu68/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu69/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu70/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu71/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu8/cpufreq/scaling_governor
2016-04-22 17:00:33 echo performance > /sys/devices/system/cpu/cpu9/cpufreq/scaling_governor
2016-04-22 17:00:34 mount -t tmpfs -o size=100% vm-scalability-tmp /tmp/vm-scalability-tmp
2016-04-22 17:00:34 truncate -s 33615351808 /tmp/vm-scalability-tmp/vm-scalability.img
2016-04-22 17:00:34 mkfs.xfs -q /tmp/vm-scalability-tmp/vm-scalability.img
2016-04-22 17:00:34 mount -o loop /tmp/vm-scalability-tmp/vm-scalability.img /tmp/vm-scalability-tmp/vm-scalability
2016-04-22 17:00:34 ./case-swap-w-rand
2016-04-22 17:00:34 ./usemem --runtime 300 -n 16 --random 6368538624
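
For readers unfamiliar with the workload: usemem is vm-scalability's load generator. The
invocation above runs 16 tasks (-n 16) doing random-offset writes (--random) over
6368538624-byte (~5.9 GiB) anonymous regions, roughly 95 GiB of anonymous memory in
total against the RAM left after the memmap=96G!4G pmem carve-out, for up to 300
seconds; with the pmem device used as swap (nr_pmem: 1), that forces sustained random
swap-out (compare vmstat.swap.so and proc-vmstat.pswpout above). A scaled-down,
single-task sketch of that access pattern -- sizes, names, and the single-process
structure are illustrative, not usemem's actual source:

    /* compiles standalone on Linux: gcc -O2 -o rand-write rand-write.c */
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/mman.h>

    #define REGION_BYTES (1UL << 30)    /* scaled down from 6368538624 */
    #define RUNTIME_SEC  10             /* scaled down from --runtime 300 */

    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);
        size_t npages = REGION_BYTES / (size_t)page;
        unsigned char *buf = mmap(NULL, REGION_BYTES, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED)
            return 1;

        srand(1);
        time_t end = time(NULL) + RUNTIME_SEC;

        /* dirty one byte in a random page per iteration: once the resident
           set can no longer be kept in RAM, faults on cold pages turn into
           the swap-in/swap-out traffic this test is designed to generate */
        while (time(NULL) < end)
            buf[((size_t)rand() % npages) * (size_t)page] = 1;

        return munmap(buf, REGION_BYTES);
    }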