From:    kernel test robot <>
Subject: [lkp] [vfs] f3f86e33dc: -5.3% will-it-scale.per_process_ops
Date:    Wed, 18 Nov 2015 14:44:07 +0800
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit f3f86e33dc3da437fa4f204588ce7c78ea756982 ("vfs: Fix pathological performance case for __alloc_fd()")
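For context: the pathological case the commit targets is a densely populated low
descriptor space, where finding the lowest free fd previously meant scanning the
open-fd bitmap word by word. The fix adds a second-level "summary" bitmap marking
which words of the open-fd bitmap are completely full, so the search can skip
them. The userspace sketch below illustrates the idea only; the names
(full_words, set_open_fd, ...) and sizes are invented here and do not match the
kernel's actual data structures:

/*
 * Two-level fd-bitmap sketch (illustration only, not kernel code).
 * open_fds:   one bit per fd, set = in use.
 * full_words: one bit per word of open_fds, set = that word is all-ones,
 *             so the free-slot search can skip it entirely.
 */
#include <stdint.h>
#include <stdio.h>

#define BITS_PER_WORD 64
#define MAX_FDS       4096
#define NWORDS        (MAX_FDS / BITS_PER_WORD)   /* 64 words          */
#define SUMMARY_WORDS (NWORDS / BITS_PER_WORD)    /* 1 summary word    */

static uint64_t open_fds[NWORDS];
static uint64_t full_words[SUMMARY_WORDS];

/* Lowest free fd: find the first not-full word via the summary bitmap,
 * then the first zero bit inside it. */
static int find_next_fd(void)
{
	for (int sw = 0; sw < SUMMARY_WORDS; sw++) {
		if (full_words[sw] == ~0ULL)
			continue;                 /* these 64 words are all full */
		int w = sw * BITS_PER_WORD + __builtin_ctzll(~full_words[sw]);
		return w * BITS_PER_WORD + __builtin_ctzll(~open_fds[w]);
	}
	return -1;                                /* table exhausted */
}

/* Mark fd busy; update the summary when its word just filled up. */
static void set_open_fd(int fd)
{
	int w = fd / BITS_PER_WORD;

	open_fds[w] |= 1ULL << (fd % BITS_PER_WORD);
	if (open_fds[w] == ~0ULL)
		full_words[w / BITS_PER_WORD] |= 1ULL << (w % BITS_PER_WORD);
}

/* Mark fd free; its word can no longer be full. */
static void clear_open_fd(int fd)
{
	int w = fd / BITS_PER_WORD;

	open_fds[w] &= ~(1ULL << (fd % BITS_PER_WORD));
	full_words[w / BITS_PER_WORD] &= ~(1ULL << (w % BITS_PER_WORD));
}

int main(void)
{
	for (int fd = 0; fd < 200; fd++)          /* fds 0..199 busy */
		set_open_fd(fd);
	printf("next free fd: %d\n", find_next_fd());   /* 200 */
	clear_open_fd(3);
	printf("next free fd: %d\n", find_next_fd());   /* 3   */
	return 0;
}

The trade-off is that the summary bitmap must be maintained on every allocate and
free, a small constant cost that a tight dup()/close() loop is well placed to
notice; that is one plausible reading of the ~5% delta reported below.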
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
  ivb42/will-it-scale/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/dup1
commit:
  8a28d67457b613258aa0578ccece206d166f2b9f
  f3f86e33dc3da437fa4f204588ce7c78ea756982
8a28d67457b61325 f3f86e33dc3da437fa4f204588
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
   5994379 ±  0%      -5.3%    5678711 ±  0%  will-it-scale.per_process_ops
   1440545 ±  0%      -5.1%    1367766 ±  2%  will-it-scale.per_thread_ops
      0.57 ±  0%      -5.9%       0.54 ±  0%  will-it-scale.scalability
      4.47 ±  2%      -3.1%       4.33 ±  1%  turbostat.RAMWatt
     59880 ±  5%     -13.1%      52055 ± 11%  cpuidle.C1-IVT.usage
    597.50 ±  4%     -19.7%     479.50 ± 16%  cpuidle.POLL.usage
  15756223 ±  0%    +367.7%   73688311 ± 84%  latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
  35858260 ±  0%    +113.3%   76474871 ± 77%  latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
      1560 ±171%    +300.1%       6241 ±  0%  numa-numastat.node0.other_node
      3101 ± 99%     -99.7%       9.50 ± 67%  numa-numastat.node1.other_node
      2980 ±  3%     -13.4%       2582 ±  1%  slabinfo.kmalloc-2048.active_objs
      3139 ±  3%     -12.5%       2746 ±  1%  slabinfo.kmalloc-2048.num_objs
      5018 ± 14%     -50.4%       2487 ± 13%  numa-vmstat.node0.nr_active_anon
      3121 ± 31%     -77.6%     700.00 ±132%  numa-vmstat.node0.nr_shmem
      3349 ± 20%     +76.6%       5916 ±  7%  numa-vmstat.node1.nr_active_anon
      1210 ± 80%    +200.8%       3640 ± 25%  numa-vmstat.node1.nr_shmem
     70442 ±  5%     -12.7%      61484 ±  0%  numa-meminfo.node0.Active
     20079 ± 14%     -50.4%       9954 ± 13%  numa-meminfo.node0.Active(anon)
     12487 ± 31%     -77.6%       2801 ±132%  numa-meminfo.node0.Shmem
     61970 ±  5%     +18.0%      73096 ±  1%  numa-meminfo.node1.Active
     13402 ± 20%     +76.6%      23671 ±  7%  numa-meminfo.node1.Active(anon)
      4843 ± 80%    +200.7%      14564 ± 25%  numa-meminfo.node1.Shmem
   1999660 ±  2%      -9.1%    1817792 ±  6%  sched_debug.cfs_rq[0]:/.min_vruntime
    814.25 ±  6%     -11.9%     717.00 ± 10%  sched_debug.cfs_rq[0]:/.util_avg
   -220009 ±-25%     -83.5%     -36294 ±-314%  sched_debug.cfs_rq[10]:/.spread0
   -220410 ±-25%     -82.7%     -38205 ±-290%  sched_debug.cfs_rq[11]:/.spread0
   -868154 ± -5%     -20.7%    -688065 ±-16%  sched_debug.cfs_rq[14]:/.spread0
     13.00 ±  0%    +105.8%      26.75 ± 61%  sched_debug.cfs_rq[15]:/.load
   -952278 ±-16%     -27.9%    -687025 ±-16%  sched_debug.cfs_rq[16]:/.spread0
   -876660 ± -6%     -21.8%    -685915 ±-15%  sched_debug.cfs_rq[17]:/.spread0
   -869841 ± -6%     -20.5%    -691511 ±-15%  sched_debug.cfs_rq[18]:/.spread0
   -872906 ± -6%     -21.0%    -689435 ±-15%  sched_debug.cfs_rq[19]:/.spread0
   -220042 ±-24%     -82.8%     -37798 ±-282%  sched_debug.cfs_rq[1]:/.spread0
   -870736 ± -6%     -20.9%    -689178 ±-16%  sched_debug.cfs_rq[20]:/.spread0
   -870782 ± -5%     -20.6%    -691440 ±-16%  sched_debug.cfs_rq[21]:/.spread0
     12.50 ± 12%     +20.0%      15.00 ±  8%  sched_debug.cfs_rq[23]:/.load_avg
   -947289 ±-16%     -27.8%    -684292 ±-16%  sched_debug.cfs_rq[23]:/.spread0
     12.50 ± 12%     +20.0%      15.00 ±  8%  sched_debug.cfs_rq[23]:/.tg_load_avg_contrib
    424.00 ± 13%     +27.4%     540.00 ±  9%  sched_debug.cfs_rq[23]:/.util_avg
   -180921 ±-30%    -100.2%     424.29 ±26645%  sched_debug.cfs_rq[25]:/.spread0
   -179335 ±-30%     -82.3%     -31706 ±-346%  sched_debug.cfs_rq[26]:/.spread0
   -180972 ±-30%    -100.1%     163.84 ±68609%  sched_debug.cfs_rq[27]:/.spread0
   -179636 ±-30%    -100.4%     736.15 ±15384%  sched_debug.cfs_rq[28]:/.spread0
   -180380 ±-30%    -101.1%       1963 ±5772%  sched_debug.cfs_rq[29]:/.spread0
     26.00 ±  3%     -18.3%      21.25 ± 22%  sched_debug.cfs_rq[2]:/.load
     29.50 ±  9%     -21.2%      23.25 ± 23%  sched_debug.cfs_rq[2]:/.runnable_load_avg
   -211354 ±-27%     -97.3%      -5780 ±-2383%  sched_debug.cfs_rq[2]:/.spread0
    762.25 ±  6%     -17.1%     632.00 ± 11%  sched_debug.cfs_rq[2]:/.util_avg
   -179346 ±-31%    -101.0%       1767 ±6351%  sched_debug.cfs_rq[30]:/.spread0
   -182129 ±-30%     -99.9%    -200.51 ±-56625%  sched_debug.cfs_rq[31]:/.spread0
   -178388 ±-30%     -99.9%    -162.33 ±-69718%  sched_debug.cfs_rq[32]:/.spread0
   -178678 ±-30%    -100.0%     -67.48 ±-166628%  sched_debug.cfs_rq[33]:/.spread0
   -177514 ±-30%    -100.1%     200.37 ±56326%  sched_debug.cfs_rq[34]:/.spread0
   -178339 ±-29%    -101.6%       2870 ±3873%  sched_debug.cfs_rq[35]:/.spread0
   -795803 ± -8%     -34.5%    -521305 ±-40%  sched_debug.cfs_rq[37]:/.spread0
   -783897 ± -6%     -22.6%    -607100 ±-18%  sched_debug.cfs_rq[38]:/.spread0
      3.00 ±  0%    +250.0%      10.50 ± 40%  sched_debug.cfs_rq[39]:/.load_avg
   -784040 ± -6%     -33.2%    -523669 ±-39%  sched_debug.cfs_rq[39]:/.spread0
      3.00 ±  0%    +250.0%      10.50 ± 40%  sched_debug.cfs_rq[39]:/.tg_load_avg_contrib
    173.75 ±  4%     +36.8%     237.75 ± 30%  sched_debug.cfs_rq[39]:/.util_avg
   -220092 ±-24%     -82.4%     -38783 ±-288%  sched_debug.cfs_rq[3]:/.spread0
   -783338 ± -6%     -22.8%    -604971 ±-18%  sched_debug.cfs_rq[41]:/.spread0
   -784423 ± -6%     -23.1%    -603402 ±-17%  sched_debug.cfs_rq[42]:/.spread0
   -785872 ± -6%     -23.0%    -605005 ±-18%  sched_debug.cfs_rq[43]:/.spread0
   -782962 ± -6%     -22.9%    -603838 ±-19%  sched_debug.cfs_rq[44]:/.spread0
   -783170 ± -6%     -23.2%    -601383 ±-18%  sched_debug.cfs_rq[45]:/.spread0
   -784950 ± -6%     -23.2%    -602937 ±-18%  sched_debug.cfs_rq[46]:/.spread0
     32.25 ± 35%     -24.8%      24.25 ±  1%  sched_debug.cfs_rq[4]:/.load
   -217411 ±-24%     -83.2%     -36433 ±-300%  sched_debug.cfs_rq[4]:/.spread0
   -219424 ±-25%     -83.0%     -37233 ±-299%  sched_debug.cfs_rq[5]:/.spread0
   -219112 ±-25%     -82.4%     -38536 ±-289%  sched_debug.cfs_rq[6]:/.spread0
   -218643 ±-24%     -82.8%     -37629 ±-298%  sched_debug.cfs_rq[7]:/.spread0
   -220909 ±-24%     -85.0%     -33175 ±-350%  sched_debug.cfs_rq[8]:/.spread0
   -220076 ±-25%     -85.9%     -31115 ±-337%  sched_debug.cfs_rq[9]:/.spread0
     89160 ±  6%      -8.7%      81395 ±  7%  sched_debug.cpu#0.nr_load_updates
     -2.75 ±-126%    -172.7%       2.00 ±136%  sched_debug.cpu#12.nr_uninterruptible
     16563 ± 21%     -39.6%      10009 ±  9%  sched_debug.cpu#13.nr_switches
     16901 ± 21%     -34.5%      11064 ±  9%  sched_debug.cpu#13.sched_count
      6432 ± 27%     -47.2%       3396 ± 38%  sched_debug.cpu#13.sched_goidle
      7244 ± 14%     -45.3%       3961 ± 39%  sched_debug.cpu#14.sched_goidle
     13.00 ±  0%    +105.8%      26.75 ± 61%  sched_debug.cpu#15.load
      1554 ± 21%     +62.9%       2531 ± 30%  sched_debug.cpu#16.ttwu_local
      6965 ± 21%     +48.0%      10308 ± 17%  sched_debug.cpu#18.sched_count
     28.25 ±  2%     -17.7%      23.25 ± 23%  sched_debug.cpu#2.cpu_load[4]
     26.00 ±  3%     -18.3%      21.25 ± 22%  sched_debug.cpu#2.load
      2703 ±  9%     +11.5%       3014 ±  8%  sched_debug.cpu#24.curr->pid
    420.00 ± 27%     -34.1%     276.75 ± 30%  sched_debug.cpu#25.sched_goidle
    247.00 ± 24%     +55.7%     384.50 ± 26%  sched_debug.cpu#27.sched_goidle
     -2.25 ±-79%    -211.1%       2.50 ± 82%  sched_debug.cpu#30.nr_uninterruptible
    715.75 ± 46%     -44.2%     399.50 ± 35%  sched_debug.cpu#32.ttwu_count
    133.50 ± 22%     +99.4%     266.25 ± 29%  sched_debug.cpu#33.ttwu_local
      1212 ± 47%     -46.7%     646.25 ± 25%  sched_debug.cpu#35.nr_switches
    506.50 ± 46%     -51.6%     245.00 ± 30%  sched_debug.cpu#35.sched_goidle
     32.25 ± 35%     -24.8%      24.25 ±  1%  sched_debug.cpu#4.load
      2973 ± 46%    +161.2%       7766 ± 47%  sched_debug.cpu#40.nr_switches
      3062 ± 46%    +156.1%       7843 ± 47%  sched_debug.cpu#40.sched_count
      1219 ± 55%    +155.9%       3121 ± 60%  sched_debug.cpu#40.sched_goidle
      1429 ±  2%     +49.2%       2131 ± 41%  sched_debug.cpu#44.curr->pid
      1.75 ± 93%     -57.1%       0.75 ±145%  sched_debug.cpu#45.nr_uninterruptible
    433.75 ± 32%     +75.1%     759.50 ± 14%  sched_debug.cpu#6.ttwu_count
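The headline deltas are in the dup1 microbenchmark itself, which stresses exactly
the path the commit touches: each iteration dup()s a descriptor, allocating the
lowest free fd through __alloc_fd(), then immediately close()s it. A minimal
standalone approximation of that loop follows; the target file and the fixed
iteration count are illustrative and not taken from the will-it-scale source:

/* dup1-like loop: allocate and free the lowest available fd over and
 * over, bouncing through __alloc_fd() on every iteration.  The real
 * will-it-scale harness runs one such loop per process or thread and
 * reports the sampled counts as per_process_ops / per_thread_ops. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	unsigned long long ops;
	int fd = open("/dev/null", O_RDONLY);   /* illustrative target */

	if (fd < 0)
		return 1;
	for (ops = 0; ops < 100000000ULL; ops++) {
		int dupfd = dup(fd);    /* lowest free fd via __alloc_fd() */

		if (dupfd < 0)
			return 1;
		close(dupfd);           /* free it again immediately */
	}
	printf("%llu dup/close ops\n", ops);
	return 0;
}

The "./runtest.py dup1 25 both 1 12 24 36 48" invocation in the attached
reproduce script runs this test in both process and thread mode at the listed
task counts.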
ivb42: Ivytown Ivy Bridge-EP
Memory: 64G
                            will-it-scale.per_process_ops

  [ASCII plot omitted: y-axis spans 5.6e+06 to 6.05e+06 ops; bisect-good
   samples (*) hold steady around 6e+06 while bisect-bad samples (O) sit
   around 5.65e+06-5.7e+06]

	[*] bisect-good sample
	[O] bisect-bad sample
To reproduce:
        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run job.yaml
Disclaimer: Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance.
Thanks,
Ying Huang

---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: will-it-scale
default-monitors:
  wait: activate-monitor
  kmsg:
  uptime:
  iostat:
  vmstat:
  numa-numastat:
  numa-vmstat:
  numa-meminfo:
  proc-vmstat:
  proc-stat:
    interval: 10
  meminfo:
  slabinfo:
  interrupts:
  lock_stat:
  latency_stats:
  softirqs:
  bdi_dev_mapping:
  diskstats:
  nfsstat:
  cpuidle:
  cpufreq-stats:
  turbostat:
  pmeter:
  sched_debug:
    interval: 60
cpufreq_governor: performance
default-watchdogs:
  oom-killer:
  watchdog:
commit: 8005c49d9aea74d382f474ce11afbbc7d7130bec
model: Ivytown Ivy Bridge-EP
nr_cpu: 48
memory: 64G
swap_partitions: LABEL=SWAP
rootfs_partition: LABEL=LKP-ROOTFS
category: benchmark
perf-profile:
  freq: 800
will-it-scale:
  test: dup1
queue: cyclic
testbox: ivb42
tbox_group: ivb42
kconfig: x86_64-rhel
enqueue_time: 2015-11-17 08:27:36.309490411 +08:00
id: 7363031303e3969c581a84334a46962a2dffa4c3
user: lkp
compiler: gcc-4.9
head_commit: a25498f782e28fcbd76b93cd9325b9e18c1c829a
base_commit: 8005c49d9aea74d382f474ce11afbbc7d7130bec
branch: linux-devel/devel-hourly-2015111705
kernel: "/pkg/linux/x86_64-rhel/gcc-4.9/8005c49d9aea74d382f474ce11afbbc7d7130bec/vmlinuz-4.4.0-rc1"
rootfs: debian-x86_64-2015-02-07.cgz
result_root: "/result/will-it-scale/performance-dup1/ivb42/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/8005c49d9aea74d382f474ce11afbbc7d7130bec/0"
job_file: "/lkp/scheduled/ivb42/cyclic_will-it-scale-performance-dup1-x86_64-rhel-CYCLIC_BASE-8005c49d9aea74d382f474ce11afbbc7d7130bec-20151117-76241-5xtdl7-0.yaml"
dequeue_time: 2015-11-17 09:51:21.843559599 +08:00
max_uptime: 1500
initrd: "/osimage/debian/debian-x86_64-2015-02-07.cgz"
bootloader_append:
- root=/dev/ram0
- user=lkp
- job=/lkp/scheduled/ivb42/cyclic_will-it-scale-performance-dup1-x86_64-rhel-CYCLIC_BASE-8005c49d9aea74d382f474ce11afbbc7d7130bec-20151117-76241-5xtdl7-0.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel
- branch=linux-devel/devel-hourly-2015111705
- commit=8005c49d9aea74d382f474ce11afbbc7d7130bec
- BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/8005c49d9aea74d382f474ce11afbbc7d7130bec/vmlinuz-4.4.0-rc1
- max_uptime=1500
- RESULT_ROOT=/result/will-it-scale/performance-dup1/ivb42/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/8005c49d9aea74d382f474ce11afbbc7d7130bec/0
- LKP_SERVER=inn
- |2-
  earlyprintk=ttyS0,115200 systemd.log_level=err
  debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100
  panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic
  load_ramdisk=2 prompt_ramdisk=0
  console=ttyS0,115200 console=tty0 vga=normal
  rw
lkp_initrd: "/lkp/lkp/lkp-x86_64.cgz"
modules_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/8005c49d9aea74d382f474ce11afbbc7d7130bec/modules.cgz"
bm_initrd: "/osimage/deps/debian-x86_64-2015-02-07.cgz/lkp.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/run-ipconfig.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/turbostat.cgz,/lkp/benchmarks/turbostat.cgz,/lkp/benchmarks/will-it-scale.cgz"
job_state: finished
loadavg: 41.80 18.86 7.37 1/501 9285
start_time: '1447725123'
end_time: '1447725433'
version: "/lkp/lkp/.src-20151116-235214"

echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu10/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu11/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu12/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu13/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu14/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu15/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu16/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu17/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu18/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu19/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu20/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu21/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu22/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu23/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu24/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu25/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu26/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu27/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu28/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu29/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu30/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu31/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu32/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu33/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu34/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu35/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu36/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu37/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu38/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu39/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu40/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu41/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu42/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu43/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu44/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu45/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu46/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu47/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu8/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu9/cpufreq/scaling_governor
./runtest.py dup1 25 both 1 12 24 36 48