Subject: [lkp] [nfsd] 07d2931094: -6.0% fsmark.files_per_sec
FYI, we noticed the following changes on

git://git.samba.org/jlayton/linux nfsd-4.4
commit 07d29310940bf676822715efd4be3c769cae97c2 ("nfsd: convert nfs4_file->fi_fds array to use nfsd_files")
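
For context, the commit title describes replacing nfs4_file's per-open-mode array of raw struct file pointers with pointers to reference-counted nfsd_file objects. The standalone C sketch below only illustrates the general shape of such a conversion; the struct layouts are simplified stand-ins written for this note, not the actual kernel definitions or the contents of the patch.

/*
 * Illustrative sketch only: simplified stand-ins for the structures
 * named in the commit title, compilable as plain userspace C.
 */
#include <stdio.h>

struct file { int placeholder; };   /* stand-in for the kernel's struct file */

/* Before: nfs4_file keeps bare struct file pointers, one per open mode. */
struct nfs4_file_before {
        struct file *fi_fds[3];     /* e.g. read-only / write-only / read-write */
};

/* A cached open file, roughly what an nfsd_file wraps in this sketch:
 * the struct file plus a reference count and cache bookkeeping. */
struct nfsd_file {
        struct file *nf_file;
        long nf_ref;
};

/* After: the same three slots point at nfsd_file cache objects instead. */
struct nfs4_file_after {
        struct nfsd_file *fi_fds[3];
};

int main(void)
{
        printf("fi_fds before: %zu bytes of struct file pointers\n",
               sizeof(struct nfs4_file_before));
        printf("fi_fds after:  %zu bytes of nfsd_file pointers, each nfsd_file "
               "adding %zu bytes in this sketch\n",
               sizeof(struct nfs4_file_after), sizeof(struct nfsd_file));
        return 0;
}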

=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/iterations/nr_threads/disk/fs/fs2/filesize/test_size/sync_method/nr_directories/nr_files_per_directory:
nhm4/fsmark/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/1x/32t/1HDD/xfs/nfsv4/9B/400M/fsyncBeforeClose/16d/256fpd

commit:
62b92d7854c66931ad66601ade1e4cc941c0e5ac
07d29310940bf676822715efd4be3c769cae97c2

62b92d7854c66931 07d29310940bf676822715efd4
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
663.20 ± 0% -6.0% 623.20 ± 0% fsmark.files_per_sec
154.58 ± 0% +6.4% 164.47 ± 0% fsmark.time.elapsed_time
154.58 ± 0% +6.4% 164.47 ± 0% fsmark.time.elapsed_time.max
124955 ± 0% -5.3% 118317 ± 0% fsmark.time.involuntary_context_switches
7.00 ± 0% +14.3% 8.00 ± 0% fsmark.time.percent_of_cpu_this_job_got
448628 ± 0% -1.4% 442375 ± 0% fsmark.time.voluntary_context_switches
10022 ± 0% +51.3% 15166 ± 1% proc-vmstat.nr_slab_unreclaimable
41390 ± 0% +9.7% 45425 ± 1% softirqs.SCHED
40093 ± 0% +51.3% 60669 ± 1% meminfo.SUnreclaim
122014 ± 0% +23.1% 150170 ± 1% meminfo.Slab
7.904e+08 ± 0% +19.9% 9.476e+08 ± 1% latency_stats.sum.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_do_close.[nfsv4].__nfs4_close.[nfsv4].nfs4_close_sync.[nfsv4].nfs4_close_context.[nfsv4].__put_nfs_open_context.nfs_file_clear_open_context.nfs_file_release.__fput.____fput.task_work_run
8.165e+08 ± 0% +18.8% 9.698e+08 ± 1% latency_stats.sum.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_run_open_task.[nfsv4]._nfs4_open_and_get_state.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs_atomic_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
61710810 ± 2% -7.2% 57259741 ± 1% latency_stats.sum.wait_on_page_bit.filemap_fdatawait_range.filemap_write_and_wait_range.nfs4_file_fsync.[nfsv4].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
7.00 ± 0% +14.3% 8.00 ± 0% time.percent_of_cpu_this_job_got
10.79 ± 3% +17.4% 12.66 ± 3% time.system_time
1.15 ± 1% +31.7% 1.51 ± 2% time.user_time
6.81 ± 2% +14.3% 7.79 ± 1% turbostat.%Busy
221.50 ± 0% +10.6% 245.00 ± 0% turbostat.Avg_MHz
9.46 ± 2% +38.4% 13.08 ± 6% turbostat.CPU%c3
9139 ± 0% -7.3% 8470 ± 1% vmstat.io.bo
94161 ± 0% -6.4% 88109 ± 0% vmstat.system.cs
31533 ± 1% -5.9% 29663 ± 0% vmstat.system.in
1.994e+08 ± 2% +20.0% 2.392e+08 ± 3% cpuidle.C3-NHM.time
870510 ± 4% +19.2% 1037726 ± 4% cpuidle.C3-NHM.usage
115687 ± 2% +22.6% 141821 ± 2% cpuidle.C6-NHM.usage
3283 ± 4% -61.4% 1266 ± 10% cpuidle.POLL.usage
863.25 ± 56% +52.0% 1312 ± 8% sched_debug.cfs_rq[3]:/.load_avg
7.25 ± 26% +93.1% 14.00 ± 16% sched_debug.cfs_rq[3]:/.nr_spread_over
34.25 ± 59% +58.4% 54.25 ± 39% sched_debug.cfs_rq[4]:/.util_avg
1143 ± 0% +20.2% 1375 ± 9% sched_debug.cfs_rq[5]:/.exec_clock
171.50 ± 95% +307.9% 699.50 ± 40% sched_debug.cfs_rq[7]:/.load_avg
253.00 ± 60% +177.0% 700.75 ± 40% sched_debug.cfs_rq[7]:/.tg_load_avg_contrib
33.50 ±106% +320.1% 140.75 ± 12% sched_debug.cfs_rq[7]:/.util_avg
1669 ± 3% -13.2% 1449 ± 4% sched_debug.cpu#0.nr_uninterruptible
289.50 ±172% -100.0% 0.00 ± -1% sched_debug.cpu#1.cpu_load[0]
-1212 ± -4% -14.8% -1033 ± -3% sched_debug.cpu#1.nr_uninterruptible
-697.75 ± -4% -16.2% -584.50 ± -9% sched_debug.cpu#2.nr_uninterruptible
868645 ± 93% -94.1% 51606 ± 4% sched_debug.cpu#3.ttwu_count
830065 ± 97% -97.3% 22366 ± 2% sched_debug.cpu#3.ttwu_local
156.50 ± 5% -15.0% 133.00 ± 6% sched_debug.cpu#4.nr_uninterruptible
731718 ± 2% -21.3% 576140 ± 11% sched_debug.cpu#6.avg_idle
104.50 ± 6% -23.4% 80.00 ± 17% sched_debug.cpu#7.nr_uninterruptible
0.00 ± 58% +144.9% 0.01 ± 56% sched_debug.rt_rq[2]:/.rt_time
19017 ± 0% +15.3% 21929 ± 1% slabinfo.buffer_head.active_objs
19017 ± 0% +15.3% 21929 ± 1% slabinfo.buffer_head.num_objs
16492 ± 0% +12.6% 18575 ± 0% slabinfo.kmalloc-128.active_objs
16521 ± 0% +12.6% 18608 ± 0% slabinfo.kmalloc-128.num_objs
5523 ± 1% +391.2% 27131 ± 1% slabinfo.kmalloc-16.active_objs
5523 ± 1% +391.2% 27132 ± 1% slabinfo.kmalloc-16.num_objs
4788 ± 4% +1347.9% 69331 ± 1% slabinfo.kmalloc-192.active_objs
116.00 ± 4% +1324.6% 1652 ± 1% slabinfo.kmalloc-192.active_slabs
4897 ± 4% +1317.5% 69425 ± 1% slabinfo.kmalloc-192.num_objs
116.00 ± 4% +1324.6% 1652 ± 1% slabinfo.kmalloc-192.num_slabs
2199 ± 4% +982.6% 23811 ± 1% slabinfo.kmalloc-256.active_objs
82.75 ± 5% +807.6% 751.00 ± 1% slabinfo.kmalloc-256.active_slabs
2672 ± 5% +799.9% 24047 ± 1% slabinfo.kmalloc-256.num_objs
82.75 ± 5% +807.6% 751.00 ± 1% slabinfo.kmalloc-256.num_slabs
8578 ± 0% +224.0% 27792 ± 1% slabinfo.kmalloc-32.active_objs
67.25 ± 0% +225.7% 219.00 ± 1% slabinfo.kmalloc-32.active_slabs
8691 ± 0% +223.3% 28101 ± 1% slabinfo.kmalloc-32.num_objs
67.25 ± 0% +225.7% 219.00 ± 1% slabinfo.kmalloc-32.num_slabs
1690 ± 2% +19.1% 2013 ± 6% slabinfo.kmalloc-512.num_objs
48504 ± 0% +11.1% 53877 ± 1% slabinfo.kmalloc-64.active_objs
48586 ± 0% +10.9% 53880 ± 1% slabinfo.kmalloc-64.num_objs
35280 ± 0% +14.3% 40314 ± 1% slabinfo.kmalloc-96.active_objs
839.75 ± 0% +14.2% 959.25 ± 1% slabinfo.kmalloc-96.active_slabs
35280 ± 0% +14.3% 40314 ± 1% slabinfo.kmalloc-96.num_objs
839.75 ± 0% +14.2% 959.25 ± 1% slabinfo.kmalloc-96.num_slabs
1306 ± 1% +11.5% 1457 ± 1% slabinfo.mnt_cache.active_objs
1306 ± 1% +11.5% 1457 ± 1% slabinfo.mnt_cache.num_objs
19130 ± 0% +15.3% 22064 ± 1% slabinfo.nfs_inode_cache.active_objs
19130 ± 0% +15.3% 22064 ± 1% slabinfo.nfs_inode_cache.num_objs
19086 ± 0% +15.3% 22012 ± 1% slabinfo.xfs_ili.active_objs
733.50 ± 0% +15.3% 846.00 ± 1% slabinfo.xfs_ili.active_slabs
19086 ± 0% +15.3% 22012 ± 1% slabinfo.xfs_ili.num_objs
733.50 ± 0% +15.3% 846.00 ± 1% slabinfo.xfs_ili.num_slabs
19113 ± 0% +15.3% 22042 ± 1% slabinfo.xfs_inode.active_objs
19113 ± 0% +15.3% 22042 ± 1% slabinfo.xfs_inode.num_objs


vm-lkp-wsx01-8G: qemu-system-x86_64 -enable-kvm -cpu kvm64
Memory: 8G

nhm4: Nehalem
Memory: 4G




cpuidle.POLL.usage

4500 ++-------------------------------------------------------------------+
| * |
4000 ++ : + |
| *.. : + * |
3500 ++ + : *. *..*.. + + .*..|
*..*.*.. .* *..*.*..*..*.* *.. + *.*..*..* + .* *
3000 ++ *. * *. |
| |
2500 ++ |
| |
2000 ++ |
| |
1500 ++ O |
O O O O O O O O O O |
1000 ++-O----O----------O-O-----O----O------------------------------------+


turbostat.Avg_MHz

250 ++--------------------------------------------------------------------+
| O |
245 ++ O O O O O O O O O |
O O O O O O O |
240 ++ |
| |
235 ++ |
| |
230 ++ * |
| + : |
225 *+.*.*..*..*. + : .*.. .*
| * : .*.. .*.*.. .*.*..*..*.*. *..*. *.*. |
220 ++ *..* *. *. *.. .. |
| * |
215 ++--------------------------------------------------------------------+


turbostat.%Busy

8 ++--------------------------------------------------------------------+
O O O O O O O O O O O |
7.8 ++ O O O O O |
7.6 ++ |
| O |
7.4 ++ |
| |
7.2 ++ |
| |
7 ++.*. .*. .*.. .*.. .*
6.8 *+ *..*. *. *..*.*.. *.*..*..*.*..*..*.*. *..*. *.*. |
| .. *.. .. |
6.6 ++ * * |
| |
6.4 ++--------------------------------------------------------------------+


fsmark.files_per_sec

740 ++--------------------------------------------------------------------+
| * |
720 ++ : |
| : : |
700 ++ : : |
| : : |
680 ++ : : |
*..*.*.. .* : .*..*.. .*.. .*.. .*..*.. .*
660 ++ *..* *..* * *..*.*. * *..*.*..*..*.*. |
| |
640 ++ |
| O |
620 O+ O O O O O O O O O O O O O O O |
| |
600 ++--------------------------------------------------------------------+


fsmark.time.elapsed_time

170 ++--------------------------------------------------------------------+
| |
165 O+ O O O O O O O O O O O O O |
| O O O |
| |
160 ++ |
| .*.. |
155 *+.*.*..*..*.* *..*.*.. .*.*..*..*.*.. .*. .*..* *..*.*..*
| : : *. *. *..*. |
150 ++ : : |
| : : |
| : : |
145 ++ * |
| |
140 ++--------------------------------------------------------------------+


fsmark.time.elapsed_time.max

170 ++--------------------------------------------------------------------+
| |
165 O+ O O O O O O O O O O O O O |
| O O O |
| |
160 ++ |
| .*.. |
155 *+.*.*..*..*.* *..*.*.. .*.*..*..*.*.. .*. .*..* *..*.*..*
| : : *. *. *..*. |
150 ++ : : |
| : : |
| : : |
145 ++ * |
| |
140 ++--------------------------------------------------------------------+


fsmark.time.involuntary_context_switches

126000 ++---*------------------------------------------------*------------+
*..* *.*..*.*.. .*.. *..*.. .*.*..*.. + * *.*..*
125000 ++ *..* + *.*..*.*. * + .. |
124000 ++ * * |
| |
123000 ++ |
122000 ++ |
| |
121000 ++ |
120000 ++ |
| |
119000 ++ O |
118000 O+ O O O O O O O O O O |
| O O O O O |
117000 ++-----------------------------------------------------------------+


time.user_time

1.6 ++-------------------------O----O-------------------------------------+
O O O |
1.5 ++ O O O O O O O O |
| O O O O |
| |
1.4 ++ |
| |
1.3 ++ |
| .*.. .*.. |
1.2 ++.*. .* .*..*. .*..*..* *.. |
*. *..*. *. *.. .*..*..* *.. .*.. |
| .* *.*. *.*..*
1.1 ++ *. |
| |
1 ++--------------------------------------------------------------------+


vmstat.io.bo

10200 ++--------------*---------------------------------------------------+
10000 ++ : |
| : : |
9800 ++ : : |
9600 ++ : : |
| : : |
9400 ++ : : |
9200 *+. : : .*.. .*. .*.. .*.. .*.. .*
9000 ++ *.*..*.*..* *..*..* *.*..*..*.*. *. .*. * *.*. |
| * |
8800 ++ |
8600 ++ O O O O |
O O O O O O O O O O O O |
8400 ++ |
8200 ++--------------------------------O---------------------------------+


proc-vmstat.nr_slab_unreclaimable

16000 ++------------------------------------------------------------------+
O O O O O |
15000 ++ O O O O O O O O O O O O |
| |
14000 ++ |
| |
13000 ++ |
| |
12000 ++ |
| |
11000 ++ |
| .*. |
10000 *+.*.*..*.*..*. *..*..*.*..*.*..*..*.*..*.*..*..*.*..*..*.*..*.*..*
| |
9000 ++------------------------------------------------------------------+


meminfo.Slab

155000 ++-----------------------------------------------------------------+
| O O |
150000 O+ O O O O O O O O O O O O O |
| O |
145000 ++ |
| |
140000 ++ |
| |
135000 ++ |
| |
130000 ++ |
| *.. |
125000 ++ + |
| .*.*..* *..*. .*.*.. .*.*..*. .*. .*.*..*.*..*.*..|
120000 *+-*-*-----------------*-------*---------*----*--*-----------------*


meminfo.SUnreclaim

65000 ++------------------------------------------------------------------+
| |
60000 O+ O O O O O O O O O O O O O O O |
| O |
| |
55000 ++ |
| |
50000 ++ |
| |
45000 ++ |
| |
| .*.*.. .*. |
40000 *+.*.*..*.*..*. *..*.*..*.*..*. *..*.*..*..*.*..*..*.*..*.*..*
| |
35000 ++------------------------------------------------------------------+


slabinfo.kmalloc-256.active_objs

25000 ++------O------------------------------O----------------------------+
O O O O O O O O O O O O O O O |
| |
20000 ++ |
| |
| |
15000 ++ |
| |
10000 ++ |
| |
| |
5000 ++ |
| |
*..*.*..*.*..*..*.*..*..*.*..*.*..*..*.*..*.*..*..*.*..*..*.*..*.*..*
0 ++------------------------------------------------------------------+


slabinfo.kmalloc-256.num_objs

25000 O+-O----O-O--O-------O--O-O--O-O--O----O----------------------------+
| O O O O O |
| |
20000 ++ |
| |
| |
15000 ++ |
| |
10000 ++ |
| |
| |
5000 ++ |
*..*.*.. .*. .*.. .*.. .*. .*.*..|
| *.*..*. *. *.*..*.*..*..* *.*..*. *..*..*.*. *
0 ++------------------------------------------------------------------+


slabinfo.kmalloc-256.active_slabs

800 ++--------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O |
700 ++ |
600 ++ |
| |
500 ++ |
| |
400 ++ |
| |
300 ++ |
200 ++ |
| |
100 ++ .*.. .*.. .*. .*.. .*.*..|
*..* *..*.*. *. *..*..*.*..*..* *..*.*..*..*..*.*..*. *
0 ++--------------------------------------------------------------------+


slabinfo.kmalloc-256.num_slabs

800 ++--------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O |
700 ++ |
600 ++ |
| |
500 ++ |
| |
400 ++ |
| |
300 ++ |
200 ++ |
| |
100 ++ .*.. .*.. .*. .*.. .*.*..|
*..* *..*.*. *. *..*..*.*..*..* *..*.*..*..*..*.*..*. *
0 ++--------------------------------------------------------------------+


slabinfo.kmalloc-192.active_objs

80000 ++------------------------------------------------------------------+
| |
70000 O+ O O O O O O O O O O O O O O O O |
60000 ++ |
| |
50000 ++ |
| |
40000 ++ |
| |
30000 ++ |
20000 ++ |
| |
10000 ++ |
*..*.*..*.*..*..*.*..*..*.*..*.*..*..*.*..*.*..*..*.*..*..*.*..*.*..*
0 ++------------------------------------------------------------------+


slabinfo.kmalloc-192.num_objs

80000 ++------------------------------------------------------------------+
| |
70000 O+ O O O O O O O O O O O O O O O O |
60000 ++ |
| |
50000 ++ |
| |
40000 ++ |
| |
30000 ++ |
20000 ++ |
| |
10000 ++ |
*..*.*..*.*..*..*.*..*..*.*..*.*..*..*.*..*.*..*..*.*..*..*.*..*.*..*
0 ++------------------------------------------------------------------+


slabinfo.kmalloc-192.active_slabs

1800 ++-------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O |
1600 ++ |
1400 ++ |
| |
1200 ++ |
1000 ++ |
| |
800 ++ |
600 ++ |
| |
400 ++ |
200 ++ |
*..*.*..*..*.*..*..*.*..*..*.*..*..*.*..*.*..*..*.*..*..*.*..*..*.*..*
0 ++-------------------------------------------------------------------+


slabinfo.kmalloc-192.num_slabs

1800 ++-------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O |
1600 ++ |
1400 ++ |
| |
1200 ++ |
1000 ++ |
| |
800 ++ |
600 ++ |
| |
400 ++ |
200 ++ |
*..*.*..*..*.*..*..*.*..*..*.*..*..*.*..*.*..*..*.*..*..*.*..*..*.*..*
0 ++-------------------------------------------------------------------+


slabinfo.kmalloc-32.active_objs

55000 ++------------------------------------------------------------------+
50000 O+ O O O O |
| |
45000 ++ |
40000 ++ |
| |
35000 ++ |
30000 ++ O O O O O O |
25000 ++ O O O O O O |
| |
20000 ++ |
15000 ++ |
| |
10000 ++.*.*..*.*..*..*.*..*..*. .*.*..*..*.*..*.*..*..*.*..*..*.*..*.*..*
5000 *+------------------------*-----------------------------------------+


slabinfo.kmalloc-32.num_objs

55000 ++------------------------------------------------------------------+
50000 O+ O O O O |
| |
45000 ++ |
40000 ++ |
| |
35000 ++ |
30000 ++ O O O O O O O O |
25000 ++ O O O O |
| |
20000 ++ |
15000 ++ |
| |
10000 ++.*.*..*.*..*..*.*..*..*.*..*.*..*..*.*..*.*..*..*.*..*..*.*..*.*..*
5000 *+------------------------------------------------------------------+


slabinfo.kmalloc-32.active_slabs

450 ++--------------------------------------------------------------------+
| |
400 O+ O O O O |
350 ++ |
| |
300 ++ |
| |
250 ++ |
| O O O O O O O O O O O |
200 ++ O |
150 ++ |
| |
100 ++ |
| .*.*..*..*.*..*..*..*. .*..*.*..*..*.*..*..*..*.*..*..*.*..*
50 *+----------------------*--*--*-*-------------------------------------+


slabinfo.kmalloc-32.num_slabs

450 ++--------------------------------------------------------------------+
| |
400 O+ O O O O |
350 ++ |
| |
300 ++ |
| |
250 ++ |
| O O O O O O O O O O O |
200 ++ O |
150 ++ |
| |
100 ++ |
| .*.*..*..*.*..*..*..*. .*..*.*..*..*.*..*..*..*.*..*..*.*..*
50 *+----------------------*--*--*-*-------------------------------------+

[*] bisect-good sample
[O] bisect-bad sample

To reproduce:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Ying Huang
---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: fsmark
default-monitors:
  wait: activate-monitor
  kmsg:
  uptime:
  iostat:
  vmstat:
  numa-numastat:
  numa-vmstat:
  numa-meminfo:
  proc-vmstat:
  proc-stat:
    interval: 10
  meminfo:
  slabinfo:
  interrupts:
  lock_stat:
  latency_stats:
  softirqs:
  bdi_dev_mapping:
  diskstats:
  nfsstat:
  cpuidle:
  cpufreq-stats:
  turbostat:
  pmeter:
  sched_debug:
    interval: 60
cpufreq_governor: performance
default-watchdogs:
  oom-killer:
  watchdog:
commit: f0fa5bea9af16950b794d509e65e6f9e8f5778b2
model: Nehalem
nr_cpu: 8
memory: 4G
hdd_partitions: "/dev/disk/by-id/ata-WDC_WD1003FBYZ-010FB0_WD-WCAW36812041-part1"
swap_partitions: "/dev/disk/by-id/ata-WDC_WD1003FBYZ-010FB0_WD-WCAW36812041-part2"
rootfs_partition: "/dev/disk/by-id/ata-WDC_WD1003FBYZ-010FB0_WD-WCAW36812041-part3"
netconsole_port: 6649
category: benchmark
iterations: 1x
nr_threads: 32t
disk: 1HDD
fs: xfs
fs2: nfsv4
fsmark:
  filesize: 9B
  test_size: 400M
  sync_method: fsyncBeforeClose
  nr_directories: 16d
  nr_files_per_directory: 256fpd
queue: cyclic
testbox: nhm4
tbox_group: nhm4
kconfig: x86_64-rhel
enqueue_time: 2015-09-17 23:40:59.069047553 +08:00
id: 062a4d391cc4a1b01e7c694c74a1691ea540c27d
user: lkp
compiler: gcc-4.9
head_commit: f0fa5bea9af16950b794d509e65e6f9e8f5778b2
base_commit: 6ff33f3902c3b1c5d0db6b1e2c70b6d76fba357f
branch: linux-devel/devel-hourly-2015091908
kernel: "/pkg/linux/x86_64-rhel/gcc-4.9/f0fa5bea9af16950b794d509e65e6f9e8f5778b2/vmlinuz-4.3.0-rc1-wl-02314-gf0fa5be"
rootfs: debian-x86_64-2015-02-07.cgz
result_root: "/result/fsmark/performance-1x-32t-1HDD-xfs-nfsv4-9B-400M-fsyncBeforeClose-16d-256fpd/nhm4/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/f0fa5bea9af16950b794d509e65e6f9e8f5778b2/0"
job_file: "/lkp/scheduled/nhm4/cyclic_fsmark-performance-1x-32t-1HDD-xfs-nfsv4-9B-400M-fsyncBeforeClose-16d-256fpd-x86_64-rhel-CYCLIC_HEAD-f0fa5bea9af16950b794d509e65e6f9e8f5778b2-20150917-30297-13jbp0j-0.yaml"
dequeue_time: 2015-09-19 09:46:54.157317533 +08:00
max_uptime: 952.08
initrd: "/osimage/debian/debian-x86_64-2015-02-07.cgz"
bootloader_append:
- root=/dev/ram0
- user=lkp
- job=/lkp/scheduled/nhm4/cyclic_fsmark-performance-1x-32t-1HDD-xfs-nfsv4-9B-400M-fsyncBeforeClose-16d-256fpd-x86_64-rhel-CYCLIC_HEAD-f0fa5bea9af16950b794d509e65e6f9e8f5778b2-20150917-30297-13jbp0j-0.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel
- branch=linux-devel/devel-hourly-2015091908
- commit=f0fa5bea9af16950b794d509e65e6f9e8f5778b2
- BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/f0fa5bea9af16950b794d509e65e6f9e8f5778b2/vmlinuz-4.3.0-rc1-wl-02314-gf0fa5be
- max_uptime=952
- RESULT_ROOT=/result/fsmark/performance-1x-32t-1HDD-xfs-nfsv4-9B-400M-fsyncBeforeClose-16d-256fpd/nhm4/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/f0fa5bea9af16950b794d509e65e6f9e8f5778b2/0
- LKP_SERVER=inn
- |-
  libata.force=1.5Gbps

  earlyprintk=ttyS0,115200 systemd.log_level=err
  debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100
  panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0
  console=ttyS0,115200 console=tty0 vga=normal

  rw
lkp_initrd: "/lkp/lkp/lkp-x86_64.cgz"
modules_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/f0fa5bea9af16950b794d509e65e6f9e8f5778b2/modules.cgz"
bm_initrd: "/osimage/deps/debian-x86_64-2015-02-07.cgz/lkp.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/run-ipconfig.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/turbostat.cgz,/lkp/benchmarks/turbostat.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/fs.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/fs2.cgz,/lkp/benchmarks/fsmark.cgz"
job_state: finished
loadavg: 31.52 14.09 5.39 1/184 3144
start_time: '1442627242'
end_time: '1442627407'
version: "/lkp/lkp/.src-20150918-192806"
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
mkfs -t xfs /dev/sdb1
mount -t xfs -o nobarrier,inode64 /dev/sdb1 /fs/sdb1
/etc/init.d/rpcbind start
/etc/init.d/nfs-common start
/etc/init.d/nfs-kernel-server start
mount -t nfs -o vers=4 localhost:/fs/sdb1 /nfs/sdb1
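# fs_mark flags, roughly: one worker thread per -d directory (32 here),
# -D subdirectories, -N files per subdirectory, -n files per thread,
# -L loop count, -S 1 = fsync each file before close, -s file size in bytes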
./fs_mark -d /nfs/sdb1/1 -d /nfs/sdb1/2 -d /nfs/sdb1/3 -d /nfs/sdb1/4 -d /nfs/sdb1/5 -d /nfs/sdb1/6 -d /nfs/sdb1/7 -d /nfs/sdb1/8 -d /nfs/sdb1/9 -d /nfs/sdb1/10 -d /nfs/sdb1/11 -d /nfs/sdb1/12 -d /nfs/sdb1/13 -d /nfs/sdb1/14 -d /nfs/sdb1/15 -d /nfs/sdb1/16 -d /nfs/sdb1/17 -d /nfs/sdb1/18 -d /nfs/sdb1/19 -d /nfs/sdb1/20 -d /nfs/sdb1/21 -d /nfs/sdb1/22 -d /nfs/sdb1/23 -d /nfs/sdb1/24 -d /nfs/sdb1/25 -d /nfs/sdb1/26 -d /nfs/sdb1/27 -d /nfs/sdb1/28 -d /nfs/sdb1/29 -d /nfs/sdb1/30 -d /nfs/sdb1/31 -d /nfs/sdb1/32 -D 16 -N 256 -n 3200 -L 1 -S 1 -s 9