Subject: [lkp] [ext4] 1f2d779fed: +34.0% fsmark.files_per_sec
FYI, we noticed the changes below on

https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 1f2d779fed21806ffed84aa65617c6125f480e85 ("ext4: optimize group search for inode allocation")


=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-4.9/performance/1HDD/9B/ext4/1x/x86_64-rhel/16d/256fpd/32t/debian-x86_64-2015-02-07.cgz/fsyncBeforeClose/lkp-ne04/400M/fsmark

commit:
29c6eaffc8868ef6fa71997d0ea507a02c52712c
1f2d779fed21806ffed84aa65617c6125f480e85

29c6eaffc8868ef6 1f2d779fed21806ffed84aa656
---------------- --------------------------
    (value ± %stddev)        %change        (value ± %stddev)    metric
1479204 ± 11% +47.7% 2184108 ± 7% fsmark.app_overhead
148.78 ± 3% +34.0% 199.38 ± 2% fsmark.files_per_sec
693.79 ± 3% -26.1% 512.52 ± 2% fsmark.time.elapsed_time
693.79 ± 3% -26.1% 512.52 ± 2% fsmark.time.elapsed_time.max
264.00 ± 0% +11372.0% 30286 ± 15% fsmark.time.file_system_inputs
2.00 ± 0% +100.0% 4.00 ± 0% fsmark.time.percent_of_cpu_this_job_got
408852 ± 1% +19.0% 486551 ± 0% fsmark.time.voluntary_context_switches
42686 ± 4% -9.2% 38753 ± 1% softirqs.RCU
43495 ± 3% -6.4% 40722 ± 4% softirqs.SCHED
717.23 ± 3% -24.2% 543.65 ± 2% uptime.boot
9657 ± 4% -28.5% 6902 ± 3% uptime.idle
3817 ± 1% +26.2% 4819 ± 2% vmstat.system.cs
431.50 ± 1% +31.4% 567.00 ± 4% vmstat.system.in
620517 ± 4% -23.2% 476762 ± 1% numa-numastat.node0.local_node
620519 ± 4% -23.2% 476763 ± 1% numa-numastat.node0.numa_hit
783065 ± 5% -24.2% 593500 ± 3% numa-numastat.node1.local_node
783066 ± 5% -24.2% 593500 ± 3% numa-numastat.node1.numa_hit
36911353 ± 8% +61.1% 59481022 ± 5% cpuidle.C1E-NHM.time
9758 ± 5% +41.5% 13810 ± 2% cpuidle.C1E-NHM.usage
4.406e+08 ± 5% +34.7% 5.934e+08 ± 5% cpuidle.C3-NHM.time
112418 ± 3% +50.3% 168929 ± 1% cpuidle.C3-NHM.usage
1.049e+10 ± 3% -29.3% 7.418e+09 ± 3% cpuidle.C6-NHM.time
81277 ± 3% -20.9% 64279 ± 2% cpuidle.POLL.usage
1059 ± 5% -8.8% 965.75 ± 7% slabinfo.blkdev_requests.num_objs
638.25 ± 4% +34.7% 859.75 ± 8% slabinfo.dquot.active_objs
638.50 ± 4% +38.5% 884.25 ± 7% slabinfo.dquot.num_objs
2026 ± 13% -26.9% 1480 ± 9% slabinfo.jbd2_journal_head.active_objs
2061 ± 12% -25.8% 1529 ± 9% slabinfo.jbd2_journal_head.num_objs
2780 ± 2% +11.5% 3099 ± 3% slabinfo.kmalloc-256.active_objs
0.82 ± 2% +40.4% 1.16 ± 4% turbostat.%Busy
22.25 ± 3% +40.4% 31.25 ± 6% turbostat.Avg_MHz
2.69 ± 5% +49.3% 4.01 ± 6% turbostat.CPU%c1
6.79 ± 2% +73.8% 11.81 ± 8% turbostat.CPU%c3
28.16 ± 3% +21.5% 34.22 ± 3% turbostat.Pkg%pc3
33.39 ± 2% -41.9% 19.41 ± 12% turbostat.Pkg%pc6
693.79 ± 3% -26.1% 512.52 ± 2% time.elapsed_time
693.79 ± 3% -26.1% 512.52 ± 2% time.elapsed_time.max
264.00 ± 0% +11372.0% 30286 ± 15% time.file_system_inputs
2.00 ± 0% +100.0% 4.00 ± 0% time.percent_of_cpu_this_job_got
17.25 ± 1% +21.9% 21.03 ± 2% time.system_time
1.14 ± 2% +33.8% 1.53 ± 12% time.user_time
408852 ± 1% +19.0% 486551 ± 0% time.voluntary_context_switches
1400397 ± 3% -23.9% 1066153 ± 2% proc-vmstat.numa_hit
1400394 ± 3% -23.9% 1066151 ± 2% proc-vmstat.numa_local
247146 ± 4% -23.1% 190110 ± 1% proc-vmstat.pgalloc_dma32
1235243 ± 4% -23.6% 943649 ± 2% proc-vmstat.pgalloc_normal
1461923 ± 4% -25.7% 1086000 ± 2% proc-vmstat.pgfault
1314409 ± 4% -26.9% 961033 ± 2% proc-vmstat.pgfree
132.00 ± 0% +11365.9% 15135 ± 15% proc-vmstat.pgpgin
3926356 ± 18% -38.5% 2413806 ± 16% proc-vmstat.pgpgout
141.50 ± 16% +1078.3% 1667 ± 52% numa-vmstat.node0.nr_shmem
22694 ± 0% -17.7% 18679 ± 11% numa-vmstat.node0.nr_slab_reclaimable
6899 ± 1% -12.3% 6049 ± 1% numa-vmstat.node0.nr_slab_unreclaimable
447024 ± 4% -13.7% 385841 ± 4% numa-vmstat.node0.numa_hit
384765 ± 5% -15.8% 323917 ± 4% numa-vmstat.node0.numa_local
63402 ± 1% +19.2% 75581 ± 11% numa-vmstat.node1.nr_file_pages
2160 ± 0% -71.6% 613.25 ±143% numa-vmstat.node1.nr_inactive_anon
48098 ± 2% +28.5% 61803 ± 12% numa-vmstat.node1.nr_inactive_file
2373 ± 0% -31.3% 1630 ± 25% numa-vmstat.node1.nr_mapped
2207 ± 1% -70.1% 659.50 ±133% numa-vmstat.node1.nr_shmem
5466 ± 4% +78.8% 9772 ± 24% numa-vmstat.node1.nr_slab_reclaimable
4392 ± 1% +20.0% 5272 ± 1% numa-vmstat.node1.nr_slab_unreclaimable
523487 ± 1% -20.1% 418465 ± 6% numa-vmstat.node1.numa_hit
519756 ± 1% -20.2% 414604 ± 6% numa-vmstat.node1.numa_local
90778 ± 0% -17.7% 74720 ± 11% numa-meminfo.node0.SReclaimable
27598 ± 1% -12.3% 24198 ± 1% numa-meminfo.node0.SUnreclaim
567.50 ± 16% +1075.3% 6670 ± 52% numa-meminfo.node0.Shmem
118377 ± 0% -16.4% 98919 ± 9% numa-meminfo.node0.Slab
253611 ± 1% +19.2% 302328 ± 11% numa-meminfo.node1.FilePages
201041 ± 2% +24.2% 249668 ± 13% numa-meminfo.node1.Inactive
8645 ± 0% -71.6% 2454 ±143% numa-meminfo.node1.Inactive(anon)
192396 ± 2% +28.5% 247214 ± 12% numa-meminfo.node1.Inactive(file)
2197 ± 15% +25.2% 2752 ± 7% numa-meminfo.node1.KernelStack
9494 ± 0% -31.3% 6524 ± 25% numa-meminfo.node1.Mapped
345480 ± 1% +20.3% 415555 ± 10% numa-meminfo.node1.MemUsed
21865 ± 4% +78.8% 39089 ± 24% numa-meminfo.node1.SReclaimable
17570 ± 1% +20.0% 21091 ± 1% numa-meminfo.node1.SUnreclaim
8830 ± 1% -70.1% 2640 ±133% numa-meminfo.node1.Shmem
39436 ± 2% +52.6% 60181 ± 16% numa-meminfo.node1.Slab
25609 ±155% -99.0% 247.00 ±173% latency_stats.avg.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
34613 ± 17% -93.2% 2366 ±100% latency_stats.avg.submit_bio_wait.blkdev_issue_flush.ext4_sync_file.vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
62402 ± 14% -100.0% 0.00 ± -1% latency_stats.avg.submit_bio_wait.blkdev_issue_flush.jbd2_cleanup_journal_tail.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.__ext4_new_inode.ext4_create.vfs_create
85292 ± 16% -100.0% 0.00 ± -1% latency_stats.avg.submit_bio_wait.blkdev_issue_flush.jbd2_cleanup_journal_tail.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
233836 ± 9% -100.0% 0.00 ± -1% latency_stats.avg.submit_bio_wait.blkdev_issue_flush.jbd2_cleanup_journal_tail.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time
98586 ± 15% -100.0% 0.00 ± -1% latency_stats.avg.submit_bio_wait.blkdev_issue_flush.jbd2_cleanup_journal_tail.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_writepages.do_writepages.__filemap_fdatawrite_range
138727 ± 20% -99.7% 356.50 ± 34% latency_stats.avg.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
0.00 ± -1% +Inf% 7289 ± 37% latency_stats.hits.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.add_dirent_to_buf.ext4_dx_add_entry.ext4_add_entry.ext4_add_nondir.ext4_create.vfs_create.path_openat
27954 ± 1% -86.6% 3734 ± 49% latency_stats.hits.ext4_mb_initialize_context.ext4_mb_new_blocks.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.ext4_sync_file.vfs_fsync_range.do_fsync.SyS_fsync
290546 ± 16% -85.7% 41420 ± 36% latency_stats.max.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.__ext4_new_inode.ext4_create.vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
265287 ± 9% -85.9% 37329 ± 63% latency_stats.max.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_mb_mark_diskspace_used.ext4_mb_new_blocks.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.ext4_sync_file
0.00 ± -1% +Inf% 16683 ± 61% latency_stats.max.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.add_dirent_to_buf.ext4_add_entry.ext4_add_nondir.ext4_create.vfs_create.path_openat.do_filp_open
0.00 ± -1% +Inf% 29279 ± 73% latency_stats.max.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.add_dirent_to_buf.ext4_dx_add_entry.ext4_add_entry.ext4_add_nondir.ext4_create.vfs_create.path_openat
63561 ±165% -99.6% 267.75 ±173% latency_stats.max.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
62202 ± 74% -95.4% 2841 ± 21% latency_stats.max.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.ext4_dirty_inode.__mark_inode_dirty.generic_write_end.ext4_da_write_end.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
0.00 ± -1% +Inf% 37841 ± 39% latency_stats.max.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.ext4_ext_tree_init.__ext4_new_inode.ext4_create.vfs_create.path_openat.do_filp_open.do_sys_open
279828 ± 21% -94.5% 15429 ± 97% latency_stats.max.ext4_mb_initialize_context.ext4_mb_new_blocks.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.ext4_sync_file.vfs_fsync_range.do_fsync.SyS_fsync
176558 ± 21% -95.8% 7488 ±102% latency_stats.max.submit_bio_wait.blkdev_issue_flush.ext4_sync_file.vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
381747 ± 21% -100.0% 0.00 ± -1% latency_stats.max.submit_bio_wait.blkdev_issue_flush.jbd2_cleanup_journal_tail.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.__ext4_new_inode.ext4_create.vfs_create
165796 ± 24% -100.0% 0.00 ± -1% latency_stats.max.submit_bio_wait.blkdev_issue_flush.jbd2_cleanup_journal_tail.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
406817 ± 14% -100.0% 0.00 ± -1% latency_stats.max.submit_bio_wait.blkdev_issue_flush.jbd2_cleanup_journal_tail.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time
165247 ± 40% -100.0% 0.00 ± -1% latency_stats.max.submit_bio_wait.blkdev_issue_flush.jbd2_cleanup_journal_tail.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_writepages.do_writepages.__filemap_fdatawrite_range
324551 ± 12% -91.2% 28575 ± 43% latency_stats.max.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.__ext4_new_inode.ext4_create.vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open
458237 ± 15% -99.5% 2140 ± 76% latency_stats.max.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
234367 ± 47% -94.6% 12766 ± 80% latency_stats.max.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_writepages.do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.ext4_sync_file.vfs_fsync_range.do_fsync
0.00 ± -1% +Inf% 23646 ±123% latency_stats.sum.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_mb_mark_diskspace_used.ext4_mb_new_blocks.ext4_ext_map_blocks.ext4_map_blocks.ext4_getblk.ext4_bread.ext4_append.do_split.ext4_dx_add_entry
0.00 ± -1% +Inf% 30398 ± 50% latency_stats.sum.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_mb_mark_diskspace_used.ext4_mb_new_blocks.ext4_ext_map_blocks.ext4_map_blocks.ext4_getblk.ext4_bread.ext4_append.ext4_mkdir.vfs_mkdir
0.00 ± -1% +Inf% 56525 ±105% latency_stats.sum.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_mb_mark_diskspace_used.ext4_mb_new_blocks.ext4_ext_map_blocks.ext4_map_blocks.ext4_getblk.ext4_bread.ext4_append.make_indexed_dir.ext4_add_entry
41228828 ± 7% -92.4% 3149423 ± 67% latency_stats.sum.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_mb_mark_diskspace_used.ext4_mb_new_blocks.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.ext4_sync_file
0.00 ± -1% +Inf% 1241212 ± 55% latency_stats.sum.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.add_dirent_to_buf.ext4_add_entry.ext4_add_nondir.ext4_create.vfs_create.path_openat.do_filp_open
0.00 ± -1% +Inf% 2727180 ± 46% latency_stats.sum.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.add_dirent_to_buf.ext4_dx_add_entry.ext4_add_entry.ext4_add_nondir.ext4_create.vfs_create.path_openat
0.00 ± -1% +Inf% 88155 ± 72% latency_stats.sum.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.ext4_dirty_inode.__mark_inode_dirty.ext4_mb_new_blocks.ext4_ext_map_blocks.ext4_map_blocks.ext4_getblk.ext4_bread
200757 ±159% -99.5% 988.25 ±173% latency_stats.sum.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
1755344 ± 9% -94.1% 103667 ± 53% latency_stats.sum.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.ext4_dirty_inode.__mark_inode_dirty.generic_write_end.ext4_da_write_end.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
0.00 ± -1% +Inf% 4885655 ± 35% latency_stats.sum.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.ext4_ext_tree_init.__ext4_new_inode.ext4_create.vfs_create.path_openat.do_filp_open.do_sys_open
0.00 ± -1% +Inf% 35213 ± 40% latency_stats.sum.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.ext4_ext_tree_init.__ext4_new_inode.ext4_mkdir.vfs_mkdir.SyS_mkdir.entry_SYSCALL_64_fastpath
35021878 ± 12% -97.3% 959921 ± 59% latency_stats.sum.ext4_mb_initialize_context.ext4_mb_new_blocks.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.ext4_sync_file.vfs_fsync_range.do_fsync.SyS_fsync
2610110 ± 10% -99.6% 10320 ±115% latency_stats.sum.submit_bio_wait.blkdev_issue_flush.ext4_sync_file.vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
7233874 ± 35% -100.0% 0.00 ± -1% latency_stats.sum.submit_bio_wait.blkdev_issue_flush.jbd2_cleanup_journal_tail.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.__ext4_new_inode.ext4_create.vfs_create
1159776 ± 50% -100.0% 0.00 ± -1% latency_stats.sum.submit_bio_wait.blkdev_issue_flush.jbd2_cleanup_journal_tail.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
822013 ± 35% -100.0% 0.00 ± -1% latency_stats.sum.submit_bio_wait.blkdev_issue_flush.jbd2_cleanup_journal_tail.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time
810979 ± 49% -100.0% 0.00 ± -1% latency_stats.sum.submit_bio_wait.blkdev_issue_flush.jbd2_cleanup_journal_tail.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_writepages.do_writepages.__filemap_fdatawrite_range
3757907 ± 7% -99.8% 7957 ± 37% latency_stats.sum.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
1020128 ± 78% -94.6% 54940 ± 73% latency_stats.sum.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_writepages.do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.ext4_sync_file.vfs_fsync_range.do_fsync
5691 ± 8% -22.7% 4398 ± 14% sched_debug.cfs_rq:/.exec_clock.0
854.81 ± 10% +157.0% 2196 ± 27% sched_debug.cfs_rq:/.exec_clock.11
2946 ± 3% -16.0% 2475 ± 4% sched_debug.cfs_rq:/.exec_clock.4
2075 ± 6% -19.3% 1673 ± 10% sched_debug.cfs_rq:/.exec_clock.8
2.75 ± 30% +318.2% 11.50 ± 58% sched_debug.cfs_rq:/.load_avg.1
5.25 ± 75% +266.7% 19.25 ± 20% sched_debug.cfs_rq:/.load_avg.7
12.00 ± 31% -70.8% 3.50 ± 24% sched_debug.cfs_rq:/.load_avg.8
1.25 ± 34% +860.0% 12.00 ± 33% sched_debug.cfs_rq:/.load_avg.9
0.22 ± 37% +116.7% 0.47 ± 19% sched_debug.cfs_rq:/.load_avg.min
6649 ± 29% -31.8% 4537 ± 15% sched_debug.cfs_rq:/.min_vruntime.8
7953 ± 4% -12.7% 6940 ± 3% sched_debug.cfs_rq:/.min_vruntime.avg
19991 ± 6% -22.3% 15531 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
4514 ± 8% -25.9% 3343 ± 9% sched_debug.cfs_rq:/.min_vruntime.stddev
0.75 ±110% +433.3% 4.00 ± 30% sched_debug.cfs_rq:/.nr_spread_over.13
4514 ± 8% -25.9% 3343 ± 9% sched_debug.cfs_rq:/.spread0.stddev
22.00 ± 19% +104.5% 45.00 ± 19% sched_debug.cfs_rq:/.util_avg.2
361962 ± 3% -24.9% 271884 ± 0% sched_debug.cpu.clock.0
361962 ± 3% -24.9% 271883 ± 0% sched_debug.cpu.clock.1
361968 ± 3% -24.9% 271888 ± 0% sched_debug.cpu.clock.10
361969 ± 3% -24.9% 271884 ± 0% sched_debug.cpu.clock.11
361969 ± 3% -24.9% 271889 ± 0% sched_debug.cpu.clock.12
361963 ± 3% -24.9% 271887 ± 0% sched_debug.cpu.clock.13
361971 ± 3% -24.9% 271890 ± 0% sched_debug.cpu.clock.14
361966 ± 3% -24.9% 271893 ± 0% sched_debug.cpu.clock.15
361965 ± 3% -24.9% 271886 ± 0% sched_debug.cpu.clock.2
361945 ± 3% -24.9% 271880 ± 0% sched_debug.cpu.clock.3
361968 ± 3% -24.9% 271885 ± 0% sched_debug.cpu.clock.4
361964 ± 3% -24.9% 271890 ± 0% sched_debug.cpu.clock.5
361971 ± 3% -24.9% 271886 ± 0% sched_debug.cpu.clock.6
361967 ± 3% -24.9% 271890 ± 0% sched_debug.cpu.clock.7
361963 ± 3% -24.9% 271886 ± 0% sched_debug.cpu.clock.8
361965 ± 3% -24.9% 271881 ± 0% sched_debug.cpu.clock.9
361965 ± 3% -24.9% 271886 ± 0% sched_debug.cpu.clock.avg
361973 ± 3% -24.9% 271893 ± 0% sched_debug.cpu.clock.max
361913 ± 3% -24.9% 271854 ± 0% sched_debug.cpu.clock.min
15.52 ± 18% -27.6% 11.24 ± 25% sched_debug.cpu.clock.stddev
361962 ± 3% -24.9% 271884 ± 0% sched_debug.cpu.clock_task.0
361962 ± 3% -24.9% 271883 ± 0% sched_debug.cpu.clock_task.1
361968 ± 3% -24.9% 271888 ± 0% sched_debug.cpu.clock_task.10
361969 ± 3% -24.9% 271884 ± 0% sched_debug.cpu.clock_task.11
361969 ± 3% -24.9% 271889 ± 0% sched_debug.cpu.clock_task.12
361963 ± 3% -24.9% 271887 ± 0% sched_debug.cpu.clock_task.13
361971 ± 3% -24.9% 271890 ± 0% sched_debug.cpu.clock_task.14
361966 ± 3% -24.9% 271893 ± 0% sched_debug.cpu.clock_task.15
361965 ± 3% -24.9% 271886 ± 0% sched_debug.cpu.clock_task.2
361945 ± 3% -24.9% 271880 ± 0% sched_debug.cpu.clock_task.3
361968 ± 3% -24.9% 271885 ± 0% sched_debug.cpu.clock_task.4
361964 ± 3% -24.9% 271890 ± 0% sched_debug.cpu.clock_task.5
361971 ± 3% -24.9% 271886 ± 0% sched_debug.cpu.clock_task.6
361967 ± 3% -24.9% 271890 ± 0% sched_debug.cpu.clock_task.7
361963 ± 3% -24.9% 271886 ± 0% sched_debug.cpu.clock_task.8
361965 ± 3% -24.9% 271881 ± 0% sched_debug.cpu.clock_task.9
361965 ± 3% -24.9% 271886 ± 0% sched_debug.cpu.clock_task.avg
361973 ± 3% -24.9% 271893 ± 0% sched_debug.cpu.clock_task.max
361913 ± 3% -24.9% 271854 ± 0% sched_debug.cpu.clock_task.min
15.52 ± 18% -27.6% 11.24 ± 25% sched_debug.cpu.clock_task.stddev
594.58 ± 12% -30.7% 412.29 ± 12% sched_debug.cpu.curr->pid.avg
8265 ± 6% -36.1% 5285 ± 12% sched_debug.cpu.curr->pid.max
2062 ± 7% -34.6% 1348 ± 11% sched_debug.cpu.curr->pid.stddev
25768 ± 4% -11.4% 22823 ± 7% sched_debug.cpu.nr_load_updates.6
7155 ± 5% -21.9% 5586 ± 14% sched_debug.cpu.nr_load_updates.stddev
7911 ± 9% +225.8% 25779 ± 60% sched_debug.cpu.nr_switches.11
61043 ± 5% -20.1% 48763 ± 23% sched_debug.cpu.nr_switches.14
11503 ± 41% +164.9% 30476 ± 57% sched_debug.cpu.nr_switches.15
7955 ± 3% +138.1% 18945 ± 39% sched_debug.cpu.nr_switches.9
7498 ± 6% +134.5% 17581 ± 40% sched_debug.cpu.nr_switches.min
61163 ± 7% -18.1% 50102 ± 9% sched_debug.cpu.nr_switches.stddev
-33543 ± -4% -41.2% -19708 ±-38% sched_debug.cpu.nr_uninterruptible.0
3853 ± 6% -52.0% 1849 ± 41% sched_debug.cpu.nr_uninterruptible.10
86.75 ± 59% +195.4% 256.25 ± 22% sched_debug.cpu.nr_uninterruptible.11
3849 ± 10% -48.7% 1974 ± 45% sched_debug.cpu.nr_uninterruptible.12
83.25 ± 66% +253.8% 294.50 ± 31% sched_debug.cpu.nr_uninterruptible.13
3930 ± 7% -48.4% 2029 ± 42% sched_debug.cpu.nr_uninterruptible.14
77.00 ± 35% +241.6% 263.00 ± 25% sched_debug.cpu.nr_uninterruptible.15
4437 ± 11% -55.8% 1959 ± 55% sched_debug.cpu.nr_uninterruptible.2
4509 ± 5% -51.1% 2206 ± 51% sched_debug.cpu.nr_uninterruptible.4
4507 ± 9% -50.6% 2226 ± 51% sched_debug.cpu.nr_uninterruptible.6
7142 ± 6% -29.1% 5065 ± 29% sched_debug.cpu.nr_uninterruptible.8
87.25 ± 42% +204.3% 265.50 ± 24% sched_debug.cpu.nr_uninterruptible.9
7146 ± 6% -29.1% 5065 ± 29% sched_debug.cpu.nr_uninterruptible.max
-33542 ± -4% -41.2% -19707 ±-38% sched_debug.cpu.nr_uninterruptible.min
8953 ± 4% -41.4% 5250 ± 38% sched_debug.cpu.nr_uninterruptible.stddev
7958 ± 9% +224.4% 25817 ± 60% sched_debug.cpu.sched_count.11
11549 ± 40% +169.2% 31095 ± 59% sched_debug.cpu.sched_count.15
8003 ± 3% +137.1% 18980 ± 39% sched_debug.cpu.sched_count.9
7638 ± 6% +130.6% 17616 ± 40% sched_debug.cpu.sched_count.min
3539 ± 9% +217.3% 11231 ± 61% sched_debug.cpu.sched_goidle.11
5137 ± 46% +159.0% 13303 ± 59% sched_debug.cpu.sched_goidle.15
3543 ± 4% +119.8% 7789 ± 37% sched_debug.cpu.sched_goidle.9
3327 ± 5% +112.9% 7083 ± 37% sched_debug.cpu.sched_goidle.min
28088 ± 7% -18.9% 22770 ± 7% sched_debug.cpu.sched_goidle.stddev
40517 ± 77% -70.7% 11888 ± 56% sched_debug.cpu.ttwu_count.15
119597 ± 3% -30.9% 82689 ± 19% sched_debug.cpu.ttwu_local.0
4054 ± 8% +44.6% 5862 ± 23% sched_debug.cpu.ttwu_local.1
5950 ± 3% -32.2% 4036 ± 21% sched_debug.cpu.ttwu_local.10
1259 ± 9% +154.6% 3206 ± 43% sched_debug.cpu.ttwu_local.13
1530 ± 8% +126.2% 3461 ± 40% sched_debug.cpu.ttwu_local.15
8444 ± 6% -25.1% 6328 ± 12% sched_debug.cpu.ttwu_local.2
3189 ± 15% +53.0% 4878 ± 31% sched_debug.cpu.ttwu_local.3
6155 ± 9% -20.3% 4905 ± 6% sched_debug.cpu.ttwu_local.4
6013 ± 12% -32.9% 4033 ± 8% sched_debug.cpu.ttwu_local.6
9433 ± 2% -19.1% 7632 ± 8% sched_debug.cpu.ttwu_local.8
1303 ± 12% +131.5% 3016 ± 38% sched_debug.cpu.ttwu_local.9
11588 ± 2% -18.7% 9421 ± 5% sched_debug.cpu.ttwu_local.avg
119620 ± 3% -30.9% 82707 ± 19% sched_debug.cpu.ttwu_local.max
1185 ± 12% +85.3% 2196 ± 26% sched_debug.cpu.ttwu_local.min
28011 ± 3% -32.1% 19019 ± 21% sched_debug.cpu.ttwu_local.stddev
361971 ± 3% -24.9% 271891 ± 0% sched_debug.cpu_clk
360052 ± 3% -24.9% 270234 ± 0% sched_debug.ktime
361971 ± 3% -24.9% 271891 ± 0% sched_debug.sched_clk
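
For orientation: each row above pairs the mean for parent commit 29c6eaffc886 (left, with its ±%stddev) with the mean for 1f2d779fed21 (right), and %change is the relative difference between the two means. For example, the headline fsmark.files_per_sec change can be recomputed with a one-liner:

awk 'BEGIN { printf "%.1f%%\n", (199.38 - 148.78) / 148.78 * 100 }'    # prints 34.0%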


lkp-ne04: Nehalem-EP
Memory: 12G



[ASCII trend plots omitted; the original report plots per-run samples for the following metrics, with "*" marking bisect-good samples and "O" marking bisect-bad samples:]

turbostat._Busy
uptime.idle
fsmark.time.voluntary_context_switches
fsmark.time.file_system_inputs
proc-vmstat.pgpgin
proc-vmstat.pgalloc_normal

To reproduce:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
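
The kernel under test comes from the linux-next tree and commit quoted at the top of this report; a minimal sketch for fetching that exact source (build and boot steps omitted, and since linux-next is rebased frequently the commit may need to be fetched from the report's branch rather than from current master):

git clone https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
cd linux-next
git checkout 1f2d779fed21806ffed84aa65617c6125f480e85   # "ext4: optimize group search for inode allocation"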


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Ying Huang
---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: fsmark
default-monitors:
wait: activate-monitor
kmsg:
uptime:
iostat:
heartbeat:
vmstat:
numa-numastat:
numa-vmstat:
numa-meminfo:
proc-vmstat:
proc-stat:
interval: 10
meminfo:
slabinfo:
interrupts:
lock_stat:
latency_stats:
softirqs:
bdi_dev_mapping:
diskstats:
nfsstat:
cpuidle:
cpufreq-stats:
turbostat:
pmeter:
sched_debug:
interval: 60
cpufreq_governor: performance
default-watchdogs:
oom-killer:
watchdog:
commit: 1f2d779fed21806ffed84aa65617c6125f480e85
model: Nehalem-EP
memory: 12G
hdd_partitions: "/dev/disk/by-id/ata-ST3500514NS_9WJ03EBA-part3"
swap_partitions: "/dev/disk/by-id/ata-ST3120026AS_5MS07HA2-part2"
rootfs_partition: "/dev/disk/by-id/ata-ST3500514NS_9WJ03EBA-part1"
category: benchmark
iterations: 1x
nr_threads: 32t
disk: 1HDD
fs: ext4
fs2:
fsmark:
filesize: 9B
test_size: 400M
sync_method: fsyncBeforeClose
nr_directories: 16d
nr_files_per_directory: 256fpd
queue: bisect
testbox: lkp-ne04
tbox_group: lkp-ne04
kconfig: x86_64-rhel
enqueue_time: 2016-02-28 17:23:12.481767277 +08:00
compiler: gcc-4.9
rootfs: debian-x86_64-2015-02-07.cgz
id: 68aa8f695a0e8531c991807de120bb153d0c18de
user: lkp
head_commit: 1736b6892f32b4fc71f44c86f026bd5729e37e1c
base_commit: 81f70ba233d5f660e1ea5fe23260ee323af5d53a
branch: linux-devel/devel-hourly-2016022721
result_root: "/result/fsmark/performance-1x-32t-1HDD-ext4-9B-400M-fsyncBeforeClose-16d-256fpd/lkp-ne04/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/1f2d779fed21806ffed84aa65617c6125f480e85/0"
job_file: "/lkp/scheduled/lkp-ne04/bisect_fsmark-performance-1x-32t-1HDD-ext4-9B-400M-fsyncBeforeClose-16d-256fpd-debian-x86_64-2015-02-07.cgz-x86_64-rhel-1f2d779fed21806ffed84aa65617c6125f480e85-20160228-126629-2r1e6c-0.yaml"
nr_cpu: "$(nproc)"
max_uptime: 2384.62
initrd: "/osimage/debian/debian-x86_64-2015-02-07.cgz"
bootloader_append:
- root=/dev/ram0
- user=lkp
- job=/lkp/scheduled/lkp-ne04/bisect_fsmark-performance-1x-32t-1HDD-ext4-9B-400M-fsyncBeforeClose-16d-256fpd-debian-x86_64-2015-02-07.cgz-x86_64-rhel-1f2d779fed21806ffed84aa65617c6125f480e85-20160228-126629-2r1e6c-0.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel
- branch=linux-devel/devel-hourly-2016022721
- commit=1f2d779fed21806ffed84aa65617c6125f480e85
- BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/1f2d779fed21806ffed84aa65617c6125f480e85/vmlinuz-4.5.0-rc2-00026-g1f2d779
- max_uptime=2384
- RESULT_ROOT=/result/fsmark/performance-1x-32t-1HDD-ext4-9B-400M-fsyncBeforeClose-16d-256fpd/lkp-ne04/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/1f2d779fed21806ffed84aa65617c6125f480e85/0
- LKP_SERVER=inn
- |2-


earlyprintk=ttyS0,115200 systemd.log_level=err
debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100
panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0
console=ttyS0,115200 console=tty0 vga=normal

rw
lkp_initrd: "/lkp/lkp/lkp-x86_64.cgz"
modules_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/1f2d779fed21806ffed84aa65617c6125f480e85/modules.cgz"
bm_initrd: "/osimage/deps/debian-x86_64-2015-02-07.cgz/lkp.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/run-ipconfig.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/turbostat.cgz,/lkp/benchmarks/turbostat.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/fs.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/fs2.cgz,/lkp/benchmarks/fsmark.cgz"
linux_headers_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/1f2d779fed21806ffed84aa65617c6125f480e85/linux-headers.cgz"
repeat_to: 2
kernel: "/pkg/linux/x86_64-rhel/gcc-4.9/1f2d779fed21806ffed84aa65617c6125f480e85/vmlinuz-4.5.0-rc2-00026-g1f2d779"
dequeue_time: 2016-02-28 17:39:51.598790914 +08:00
job_state: finished
loadavg: 30.66 26.71 13.96 2/234 13815
start_time: '1456652435'
end_time: '1456652948'
version: "/lkp/lkp/.src-20160226-194908"
2016-02-28 17:40:32 echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
2016-02-28 17:40:32 echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
2016-02-28 17:40:32 echo performance > /sys/devices/system/cpu/cpu10/cpufreq/scaling_governor
2016-02-28 17:40:32 echo performance > /sys/devices/system/cpu/cpu11/cpufreq/scaling_governor
2016-02-28 17:40:32 echo performance > /sys/devices/system/cpu/cpu12/cpufreq/scaling_governor
2016-02-28 17:40:32 echo performance > /sys/devices/system/cpu/cpu13/cpufreq/scaling_governor
2016-02-28 17:40:32 echo performance > /sys/devices/system/cpu/cpu14/cpufreq/scaling_governor
2016-02-28 17:40:32 echo performance > /sys/devices/system/cpu/cpu15/cpufreq/scaling_governor
2016-02-28 17:40:32 echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
2016-02-28 17:40:32 echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
2016-02-28 17:40:32 echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
2016-02-28 17:40:32 echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
2016-02-28 17:40:32 echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
2016-02-28 17:40:32 echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
2016-02-28 17:40:32 echo performance > /sys/devices/system/cpu/cpu8/cpufreq/scaling_governor
2016-02-28 17:40:32 echo performance > /sys/devices/system/cpu/cpu9/cpufreq/scaling_governor
2016-02-28 17:40:32 mkfs -t ext4 -q -F /dev/sda3
2016-02-28 17:40:34 mount -t ext4 /dev/sda3 /fs/sda3
2016-02-28 17:40:35 ./fs_mark -d /fs/sda3/1 -d /fs/sda3/2 -d /fs/sda3/3 -d /fs/sda3/4 -d /fs/sda3/5 -d /fs/sda3/6 -d /fs/sda3/7 -d /fs/sda3/8 -d /fs/sda3/9 -d /fs/sda3/10 -d /fs/sda3/11 -d /fs/sda3/12 -d /fs/sda3/13 -d /fs/sda3/14 -d /fs/sda3/15 -d /fs/sda3/16 -d /fs/sda3/17 -d /fs/sda3/18 -d /fs/sda3/19 -d /fs/sda3/20 -d /fs/sda3/21 -d /fs/sda3/22 -d /fs/sda3/23 -d /fs/sda3/24 -d /fs/sda3/25 -d /fs/sda3/26 -d /fs/sda3/27 -d /fs/sda3/28 -d /fs/sda3/29 -d /fs/sda3/30 -d /fs/sda3/31 -d /fs/sda3/32 -D 16 -N 256 -n 3200 -L 1 -S 1 -s 9
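
For readability, here is a hedged mapping of the fs_mark options above back to the job parameters (option meanings follow fs_mark's usage text; only the pairing with this job's values comes from this report):

#  -d /fs/sda3/1 .. /fs/sda3/32   one working directory per thread      (nr_threads: 32t)
#  -D 16                          subdirectories per directory          (nr_directories: 16d)
#  -N 256                         files per subdirectory                (nr_files_per_directory: 256fpd)
#  -n 3200                        files created per directory per loop  (presumably derived from test_size: 400M)
#  -L 1                           number of loops                       (iterations: 1x)
#  -S 1                           fsync each file before close          (sync_method: fsyncBeforeClose)
#  -s 9                           file size in bytes                    (filesize: 9B)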