Subject: Re: [lkp-robot] [fs] 3deb642f0d: will-it-scale.per_process_ops -8.8% regression
On 06/27, Christoph Hellwig wrote:
> On Tue, Jun 26, 2018 at 02:03:38PM +0800, Ye Xiaolong wrote:
>> Hi,
>>
>> On 06/22, Christoph Hellwig wrote:
>>> Hi Xiaolong,
>>>
>>> can you retest this workload on the following branch:
>>>
>>>    git://git.infradead.org/users/hch/vfs.git remove-get-poll-head
>>>
>>> Gitweb:
>>>
>>>    http://git.infradead.org/users/hch/vfs.git/shortlog/refs/heads/remove-get-poll-head
>>
>> Here is the comparison for commit 3deb642f0d and commit 8fbedc1 ("fs: replace f_ops->get_poll_head with a static ->f_poll_head pointer") in the remove-get-poll-head branch.
>
> Especially the boot-time numbers and some others look like they include
> additional changes.
>
> Can you compare the baseline of my tree, which is
> 894b8c00 ("Merge tag 'for_v4.18-rc2' of
> git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs"), against 8fbedc1
> ("fs: replace f_ops->get_poll_head with a static ->f_poll_head pointer")?

Here is the updated result:

testcase/path_params/tbox_group/run: will-it-scale/poll2-performance/lkp-sb03

894b8c000ae6c106          8fbedc19c94fd25a2b9b327015
----------------          --------------------------
         %stddev    change          %stddev
             \         |                \
      404611 ± 4%      5%       424608            will-it-scale.per_process_ops
        1489 ±21%     28%         1899 ±18%       will-it-scale.time.voluntary_context_switches
    45828560                  46155690            will-it-scale.workload
        2337                      2342            will-it-scale.time.system_time
         806                       806            will-it-scale.time.percent_of_cpu_this_job_got
         310                       310            will-it-scale.time.elapsed_time
         310                       310            will-it-scale.time.elapsed_time.max
        4096                      4096            will-it-scale.time.page_size
      233917                    233862            will-it-scale.per_thread_ops
       17196                     17179            will-it-scale.time.minor_page_faults
        9901                      9862            will-it-scale.time.maximum_resident_set_size
       14705 ± 3%                14397 ± 4%       will-it-scale.time.involuntary_context_switches
         167                       163            will-it-scale.time.user_time
        0.66 ±25%    -17%         0.54            will-it-scale.scalability
      120508 ±15%     -7%       112098 ± 5%       interrupts.CAL:Function_call_interrupts
        1670 ± 3%     10%         1845 ± 3%       vmstat.system.cs
       32707                     32635            vmstat.system.in
         121                       122            turbostat.CorWatt
         149                       150            turbostat.PkgWatt
        1573                      1573            turbostat.Avg_MHz
       17.54 ±19%                17.77 ±19%       boot-time.kernel_boot
         824 ±12%                  834 ±12%       boot-time.idle
       27.45 ±12%                27.69 ±12%       boot-time.boot
       16.96 ±21%                16.93 ±21%       boot-time.dhcp
        1489 ±21%     28%         1899 ±18%       time.voluntary_context_switches
        2337                      2342            time.system_time
         806                       806            time.percent_of_cpu_this_job_got
         310                       310            time.elapsed_time
         310                       310            time.elapsed_time.max
        4096                      4096            time.page_size
       17196                     17179            time.minor_page_faults
        9901                      9862            time.maximum_resident_set_size
       14705 ± 3%                14397 ± 4%       time.involuntary_context_switches
         167                       163            time.user_time
       18320           6%        19506 ± 8%       proc-vmstat.nr_slab_unreclaimable
        1518 ± 7%                 1558 ±10%       proc-vmstat.numa_hint_faults
        1387 ± 8%                 1421 ± 9%       proc-vmstat.numa_hint_faults_local
        1873 ± 5%                 1917 ± 8%       proc-vmstat.numa_pte_updates
       19987                     20005            proc-vmstat.nr_anon_pages
        8464                      8471            proc-vmstat.nr_kernel_stack
      309815                    310062            proc-vmstat.nr_file_pages
       50828                     50828            proc-vmstat.nr_free_cma
    16065590                  16064831            proc-vmstat.nr_free_pages
     3194669                   3194517            proc-vmstat.nr_dirty_threshold
     1595384                   1595308            proc-vmstat.nr_dirty_background_threshold
      798886                    797937            proc-vmstat.pgfault
        6510                      6499            proc-vmstat.nr_mapped
      659089                    657491            proc-vmstat.numa_local
      665458                    663786            proc-vmstat.numa_hit
        1037                      1033            proc-vmstat.nr_page_table_pages
      669923                    665906            proc-vmstat.pgfree
      676982                    672385            proc-vmstat.pgalloc_normal
        6368                      6294            proc-vmstat.numa_other
       13013          -7%        12152 ±11%       proc-vmstat.nr_slab_reclaimable
    51213164 ±18%     23%     63014695 ±25%       perf-stat.node-loads
    22096136 ±28%     20%     26619357 ±35%       perf-stat.node-load-misses
   2.079e+08 ± 9%     12%    2.323e+08 ±11%       perf-stat.cache-misses
      515039 ± 3%     10%       568299 ± 3%       perf-stat.context-switches
   3.283e+08 ±22%     10%    3.622e+08 ± 5%       perf-stat.iTLB-loads
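
A note on reading the table: the left column is the baseline (894b8c00), the right column is the patched tree (8fbedc1), and the change column appears to be the relative delta of the means, e.g. (424608 - 404611) / 404611 ≈ +4.9%, printed as the 5% shown for will-it-scale.per_process_ops.

The poll2 testcase above stresses poll(2) in a tight loop, so it is sensitive to per-call overhead on the poll entry path. A minimal userspace sketch of that kind of loop follows; the fd count, the use of idle pipes, and the iteration count are assumptions for illustration, not copied from the will-it-scale source.

#include <poll.h>
#include <stdio.h>
#include <unistd.h>

#define NFDS 128		/* assumed fd count, for illustration */
#define ITERATIONS 1000000UL

int main(void)
{
	struct pollfd pfds[NFDS];
	unsigned long ops;

	/* Poll the read ends of idle pipes: nothing is ever written, so
	 * with a zero timeout every poll() returns immediately and the
	 * loop measures pure poll-path overhead. */
	for (int i = 0; i < NFDS; i++) {
		int fds[2];

		if (pipe(fds) < 0) {
			perror("pipe");
			return 1;
		}
		pfds[i].fd = fds[0];
		pfds[i].events = POLLIN;
	}

	for (ops = 0; ops < ITERATIONS; ops++)
		poll(pfds, NFDS, 0);

	printf("completed %lu poll() calls\n", ops);
	return 0;
}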

Thanks,
Xiaolong
