Subject: [lkp] [tmpfs] afa2db2fb6: -14.5% aim9.creat-clo.ops_per_sec
FYI, we noticed the following changes on

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit afa2db2fb6f15f860069de94a1257db57589fe95 ("tmpfs: truncate prealloc blocks past i_size")
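
For readers less familiar with the mechanism the commit title refers to: "prealloc blocks past i_size" are blocks reserved with fallocate(FALLOC_FL_KEEP_SIZE), which allocates space without growing the file size; the regressing commit changes how such blocks are handled on truncate. A quick illustration on a tmpfs mount (my own sketch, assuming /dev/shm is tmpfs and util-linux fallocate(1) is available; it is not part of the test itself):

    # Illustration only: reserve 1 MiB of blocks on a tmpfs file while i_size stays 0.
    f=/dev/shm/prealloc-demo
    : > "$f"
    fallocate --keep-size --length 1M "$f"    # blocks allocated, size unchanged
    stat -c 'size=%s bytes, blocks=%b' "$f"
    truncate -s 0 "$f"                        # the truncate/setattr path is what the commit modifies
    stat -c 'size=%s bytes, blocks=%b' "$f"
    rm -f "$f"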


=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/testtime/test:
lkp-wsx02/aim9/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/300s/creat-clo

commit:
c435a390574d012f8d30074135d8fcc6f480b484
afa2db2fb6f15f860069de94a1257db57589fe95

c435a390574d012f afa2db2fb6f15f860069de94a1
---------------- --------------------------
         %stddev          %change          %stddev
563108 ± 0% -14.5% 481585 ± 6% aim9.creat-clo.ops_per_sec
13485 ± 9% -17.2% 11162 ± 8% numa-meminfo.node0.SReclaimable
9.21 ± 4% -11.7% 8.13 ± 1% time.user_time
2.04 ± 10% -19.6% 1.64 ± 14% turbostat.CPU%c1
11667682 ± 96% -96.0% 463268 ±104% cpuidle.C1E-NHM.time
2401 ± 3% -38.8% 1470 ± 27% cpuidle.C3-NHM.usage
2.25 ± 48% +166.7% 6.00 ± 20% numa-numastat.node2.other_node
4.75 ± 68% +126.3% 10.75 ± 34% numa-numastat.node3.other_node
3370 ± 9% -17.2% 2790 ± 8% numa-vmstat.node0.nr_slab_reclaimable
15.00 ±101% +338.3% 65.75 ± 69% numa-vmstat.node1.nr_dirtied
14.33 ±108% +357.0% 65.50 ± 68% numa-vmstat.node1.nr_written
43359 ± 0% -50.6% 21399 ± 58% numa-vmstat.node2.numa_other
5522042 ± 0% -11.8% 4871759 ± 5% proc-vmstat.numa_hit
5522030 ± 0% -11.8% 4871736 ± 5% proc-vmstat.numa_local
10381338 ± 0% -16.1% 8713670 ± 5% proc-vmstat.pgalloc_normal
10403821 ± 0% -12.5% 9099427 ± 5% proc-vmstat.pgfree
1101 ± 5% -15.9% 926.25 ± 12% slabinfo.blkdev_ioc.active_objs
1101 ± 5% -15.9% 926.25 ± 12% slabinfo.blkdev_ioc.num_objs
1058 ± 3% -12.1% 930.75 ± 8% slabinfo.file_lock_ctx.active_objs
1058 ± 3% -12.1% 930.75 ± 8% slabinfo.file_lock_ctx.num_objs
872.38 ± 56% -46.5% 467.10 ± 70% sched_debug.cfs_rq[11]:/.exec_clock
530.50 ± 22% +225.1% 1724 ± 52% sched_debug.cfs_rq[13]:/.avg->runnable_avg_sum
11.00 ± 23% +234.1% 36.75 ± 54% sched_debug.cfs_rq[13]:/.tg_runnable_contrib
675.18 ± 30% +492.9% 4003 ± 61% sched_debug.cfs_rq[1]:/.exec_clock
3045 ± 37% +420.7% 15858 ± 73% sched_debug.cfs_rq[1]:/.min_vruntime
5240 ± 48% -56.8% 2264 ± 35% sched_debug.cfs_rq[22]:/.min_vruntime
5424 ± 93% -93.7% 339.50 ± 70% sched_debug.cfs_rq[23]:/.avg->runnable_avg_sum
117.00 ± 94% -94.4% 6.50 ± 79% sched_debug.cfs_rq[23]:/.tg_runnable_contrib
337.21 ± 15% -40.1% 201.92 ± 41% sched_debug.cfs_rq[24]:/.exec_clock
199.07 ± 78% +241.7% 680.17 ± 50% sched_debug.cfs_rq[25]:/.exec_clock
367.50 ± 12% -37.2% 230.75 ± 24% sched_debug.cfs_rq[27]:/.avg->runnable_avg_sum
7.00 ± 17% -39.3% 4.25 ± 30% sched_debug.cfs_rq[27]:/.tg_runnable_contrib
326.96 ± 15% -42.6% 187.64 ± 47% sched_debug.cfs_rq[28]:/.exec_clock
200.71 ± 88% +1505.4% 3222 ± 75% sched_debug.cfs_rq[29]:/.exec_clock
3240 ± 20% +72.0% 5574 ± 23% sched_debug.cfs_rq[31]:/.min_vruntime
97.47 ± 42% +891.3% 966.27 ± 53% sched_debug.cfs_rq[37]:/.exec_clock
1403 ± 55% +246.3% 4858 ± 53% sched_debug.cfs_rq[37]:/.min_vruntime
1461 ± 50% +143.7% 3562 ± 52% sched_debug.cfs_rq[41]:/.min_vruntime
184.00 ± 46% +671.9% 1420 ± 57% sched_debug.cfs_rq[42]:/.avg->runnable_avg_sum
3.25 ± 66% +823.1% 30.00 ± 59% sched_debug.cfs_rq[42]:/.tg_runnable_contrib
69.67 ± 57% +310.2% 285.75 ± 60% sched_debug.cfs_rq[46]:/.blocked_load_avg
69.67 ± 57% +310.2% 285.75 ± 60% sched_debug.cfs_rq[46]:/.tg_load_contrib
107.61 ± 51% +155.0% 274.41 ± 13% sched_debug.cfs_rq[49]:/.exec_clock
3332 ± 40% -85.4% 487.59 ± 87% sched_debug.cfs_rq[4]:/.exec_clock
16.00 ±104% +1359.4% 233.50 ± 81% sched_debug.cfs_rq[53]:/.blocked_load_avg
16.00 ±104% +1360.9% 233.75 ± 81% sched_debug.cfs_rq[53]:/.tg_load_contrib
2502 ± 21% +74.1% 4357 ± 22% sched_debug.cfs_rq[5]:/.min_vruntime
308.22 ± 17% -53.7% 142.65 ± 64% sched_debug.cfs_rq[60]:/.exec_clock
91.55 ± 65% +530.7% 577.43 ± 93% sched_debug.cfs_rq[61]:/.exec_clock
1023 ± 55% +205.9% 3130 ± 47% sched_debug.cfs_rq[61]:/.min_vruntime
10369 ± 2% -14.2% 8892 ± 6% sched_debug.cfs_rq[63]:/.tg_load_avg
2143 ± 6% -11.1% 1905 ± 7% sched_debug.cfs_rq[64]:/.tg->runnable_avg
10383 ± 2% -15.9% 8727 ± 4% sched_debug.cfs_rq[64]:/.tg_load_avg
76765 ± 94% -98.9% 872.14 ± 62% sched_debug.cfs_rq[65]:/.exec_clock
2142 ± 6% -11.1% 1905 ± 7% sched_debug.cfs_rq[65]:/.tg->runnable_avg
10306 ± 3% -16.6% 8596 ± 6% sched_debug.cfs_rq[65]:/.tg_load_avg
2144 ± 6% -10.9% 1912 ± 7% sched_debug.cfs_rq[66]:/.tg->runnable_avg
10312 ± 3% -16.6% 8599 ± 6% sched_debug.cfs_rq[66]:/.tg_load_avg
2151 ± 6% -11.1% 1913 ± 7% sched_debug.cfs_rq[67]:/.tg->runnable_avg
10302 ± 3% -16.8% 8568 ± 7% sched_debug.cfs_rq[67]:/.tg_load_avg
2150 ± 6% -10.8% 1917 ± 7% sched_debug.cfs_rq[68]:/.tg->runnable_avg
10242 ± 3% -16.9% 8516 ± 7% sched_debug.cfs_rq[68]:/.tg_load_avg
2152 ± 6% -11.2% 1911 ± 6% sched_debug.cfs_rq[69]:/.tg->runnable_avg
10201 ± 3% -17.4% 8430 ± 7% sched_debug.cfs_rq[69]:/.tg_load_avg
2154 ± 6% -11.3% 1910 ± 6% sched_debug.cfs_rq[70]:/.tg->runnable_avg
10132 ± 4% -17.3% 8379 ± 7% sched_debug.cfs_rq[70]:/.tg_load_avg
2159 ± 5% -11.3% 1914 ± 6% sched_debug.cfs_rq[71]:/.tg->runnable_avg
10119 ± 4% -16.9% 8411 ± 7% sched_debug.cfs_rq[71]:/.tg_load_avg
251.79 ± 15% -37.9% 156.44 ± 37% sched_debug.cfs_rq[72]:/.exec_clock
2161 ± 5% -11.2% 1919 ± 6% sched_debug.cfs_rq[72]:/.tg->runnable_avg
10119 ± 4% -16.7% 8429 ± 7% sched_debug.cfs_rq[72]:/.tg_load_avg
2123 ± 48% +76.5% 3748 ± 22% sched_debug.cfs_rq[73]:/.min_vruntime
2164 ± 5% -11.5% 1916 ± 6% sched_debug.cfs_rq[73]:/.tg->runnable_avg
10167 ± 4% -17.5% 8389 ± 8% sched_debug.cfs_rq[73]:/.tg_load_avg
2816 ± 62% -60.7% 1106 ± 47% sched_debug.cfs_rq[74]:/.min_vruntime
2169 ± 5% -11.5% 1921 ± 6% sched_debug.cfs_rq[74]:/.tg->runnable_avg
10166 ± 3% -17.5% 8388 ± 8% sched_debug.cfs_rq[74]:/.tg_load_avg
2167 ± 6% -11.5% 1918 ± 6% sched_debug.cfs_rq[75]:/.tg->runnable_avg
10141 ± 3% -18.1% 8304 ± 7% sched_debug.cfs_rq[75]:/.tg_load_avg
2165 ± 6% -11.6% 1915 ± 6% sched_debug.cfs_rq[76]:/.tg->runnable_avg
10115 ± 3% -18.3% 8261 ± 7% sched_debug.cfs_rq[76]:/.tg_load_avg
164.34 ± 26% +61.6% 265.63 ± 31% sched_debug.cfs_rq[77]:/.exec_clock
1944 ± 19% +92.2% 3736 ± 44% sched_debug.cfs_rq[77]:/.min_vruntime
2165 ± 6% -11.5% 1917 ± 7% sched_debug.cfs_rq[77]:/.tg->runnable_avg
9935 ± 2% -17.0% 8243 ± 7% sched_debug.cfs_rq[77]:/.tg_load_avg
2169 ± 6% -11.5% 1920 ± 6% sched_debug.cfs_rq[78]:/.tg->runnable_avg
9924 ± 2% -16.6% 8276 ± 7% sched_debug.cfs_rq[78]:/.tg_load_avg
2170 ± 6% -11.3% 1924 ± 6% sched_debug.cfs_rq[79]:/.tg->runnable_avg
9901 ± 3% -16.2% 8301 ± 7% sched_debug.cfs_rq[79]:/.tg_load_avg
3130 ± 24% +84.8% 5784 ± 22% sched_debug.cfs_rq[7]:/.min_vruntime
54.00 ±155% +502.8% 325.50 ± 71% sched_debug.cfs_rq[8]:/.blocked_load_avg
0.75 ±110% +233.3% 2.50 ± 20% sched_debug.cfs_rq[8]:/.nr_spread_over
54.00 ±155% +510.6% 329.75 ± 70% sched_debug.cfs_rq[8]:/.tg_load_contrib
463.50 ± 17% +284.5% 1782 ± 23% sched_debug.cfs_rq[9]:/.avg->runnable_avg_sum
9.25 ± 15% +313.5% 38.25 ± 25% sched_debug.cfs_rq[9]:/.tg_runnable_contrib
10937 ± 12% +33.6% 14607 ± 12% sched_debug.cpu#1.nr_load_updates
7725 ± 6% +24.0% 9575 ± 10% sched_debug.cpu#13.nr_load_updates
1854 ± 67% +752.3% 15802 ± 63% sched_debug.cpu#13.nr_switches
2061 ± 54% +672.2% 15918 ± 64% sched_debug.cpu#13.sched_count
872.75 ± 67% +785.0% 7723 ± 63% sched_debug.cpu#13.sched_goidle
277.25 ± 34% +315.5% 1152 ± 51% sched_debug.cpu#13.ttwu_local
4484 ±114% +270.1% 16600 ± 96% sched_debug.cpu#21.nr_switches
2158 ±115% +280.1% 8205 ± 97% sched_debug.cpu#21.sched_goidle
7863 ± 9% +20.8% 9497 ± 13% sched_debug.cpu#25.nr_load_updates
7848 ± 15% +59.2% 12495 ± 24% sched_debug.cpu#29.nr_load_updates
3109 ±103% +326.7% 13267 ± 75% sched_debug.cpu#29.nr_switches
3510 ± 85% +288.3% 13631 ± 71% sched_debug.cpu#29.sched_count
1502 ±105% +280.4% 5714 ± 75% sched_debug.cpu#29.sched_goidle
1473 ± 97% +293.8% 5803 ± 54% sched_debug.cpu#29.ttwu_count
708.25 ±119% +352.6% 3205 ± 65% sched_debug.cpu#29.ttwu_local
2741 ± 40% -50.6% 1353 ± 35% sched_debug.cpu#32.nr_switches
2747 ± 40% -50.5% 1358 ± 35% sched_debug.cpu#32.sched_count
1285 ± 43% -53.4% 598.50 ± 34% sched_debug.cpu#32.sched_goidle
6713 ± 2% +10.3% 7406 ± 2% sched_debug.cpu#37.nr_load_updates
701.00 ± 54% +589.1% 4830 ± 52% sched_debug.cpu#37.nr_switches
707.50 ± 54% +583.7% 4837 ± 52% sched_debug.cpu#37.sched_count
306.50 ± 55% +650.9% 2301 ± 55% sched_debug.cpu#37.sched_goidle
292.25 ± 64% +743.1% 2464 ±112% sched_debug.cpu#37.ttwu_count
178.00 ± 61% +125.3% 401.00 ± 4% sched_debug.cpu#37.ttwu_local
17407 ± 65% -65.6% 5986 ± 43% sched_debug.cpu#4.nr_switches
406.25 ± 81% +1106.8% 4902 ± 98% sched_debug.cpu#41.ttwu_count
179.25 ± 57% +119.2% 393.00 ± 11% sched_debug.cpu#41.ttwu_local
3.50 ± 14% -35.7% 2.25 ± 19% sched_debug.cpu#42.nr_uninterruptible
2593 ± 74% -55.7% 1148 ± 32% sched_debug.cpu#47.nr_switches
2599 ± 74% -55.6% 1153 ± 32% sched_debug.cpu#47.sched_count
766.00 ± 53% +551.0% 4986 ± 76% sched_debug.cpu#49.nr_switches
344.00 ± 52% +603.2% 2419 ± 78% sched_debug.cpu#49.sched_goidle
693.75 ±133% +671.1% 5349 ± 96% sched_debug.cpu#49.ttwu_count
8119 ± 7% +23.0% 9984 ± 16% sched_debug.cpu#5.nr_load_updates
2417 ± 52% +516.7% 14908 ± 68% sched_debug.cpu#5.nr_switches
2712 ± 40% +463.7% 15290 ± 64% sched_debug.cpu#5.sched_count
1116 ± 55% +557.1% 7336 ± 69% sched_debug.cpu#5.sched_goidle
714.75 ± 53% +124.5% 1604 ± 19% sched_debug.cpu#57.nr_switches
720.00 ± 52% +123.6% 1609 ± 19% sched_debug.cpu#57.sched_count
313.75 ± 53% +132.7% 730.00 ± 20% sched_debug.cpu#57.sched_goidle
289.75 ± 80% +463.1% 1631 ± 86% sched_debug.cpu#57.ttwu_count
4164 ± 83% -80.4% 815.25 ± 49% sched_debug.cpu#60.nr_switches
2008 ± 86% -82.0% 362.50 ± 49% sched_debug.cpu#60.sched_goidle
800.25 ± 44% +59.8% 1279 ± 9% sched_debug.cpu#61.nr_switches
807.00 ± 43% +59.1% 1284 ± 9% sched_debug.cpu#61.sched_count
1338 ±138% +556.8% 8791 ± 31% sched_debug.cpu#61.ttwu_count
167.25 ± 60% +112.6% 355.50 ± 5% sched_debug.cpu#61.ttwu_local
-0.50 ±-300% -450.0% 1.75 ± 24% sched_debug.cpu#63.nr_uninterruptible
339.75 ± 8% -12.3% 298.00 ± 8% sched_debug.cpu#63.ttwu_local
5420 ± 77% +239.4% 18395 ± 39% sched_debug.cpu#65.nr_switches
6364 ± 57% +193.7% 18690 ± 39% sched_debug.cpu#65.sched_count
2557 ± 83% +256.1% 9106 ± 39% sched_debug.cpu#65.sched_goidle
978.50 ± 9% +37.9% 1349 ± 19% sched_debug.cpu#68.ttwu_count
735.75 ± 50% +117.2% 1597 ± 32% sched_debug.cpu#73.nr_switches
741.50 ± 50% +116.1% 1602 ± 32% sched_debug.cpu#73.sched_count
300.50 ± 55% +140.0% 721.25 ± 36% sched_debug.cpu#73.sched_goidle
214.75 ± 54% +65.1% 354.50 ± 5% sched_debug.cpu#73.ttwu_local
9.75 ± 37% -71.8% 2.75 ±136% sched_debug.cpu#77.nr_uninterruptible
960.50 ±117% +508.8% 5848 ± 91% sched_debug.cpu#77.ttwu_count
1309 ± 14% +54.1% 2018 ± 29% sched_debug.cpu#8.ttwu_count
388.00 ± 6% +104.0% 791.50 ± 45% sched_debug.cpu#8.ttwu_local
2319 ± 56% +210.3% 7198 ± 39% sched_debug.cpu#9.nr_switches
2514 ± 47% +197.1% 7472 ± 33% sched_debug.cpu#9.sched_count
1064 ± 57% +189.9% 3085 ± 32% sched_debug.cpu#9.sched_goidle


lkp-wsx02: Westmere-EX
Memory: 128G




aim9.creat-clo.ops_per_sec

[ASCII trend plot of ops_per_sec per run: bisect-good samples ([*]) cluster
near ~560000 ops/sec, while bisect-bad samples ([O]) fall roughly between
400000 and 500000 ops/sec, matching the headline -14.5% change.]

[*] bisect-good sample
[O] bisect-bad sample

To reproduce:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
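
For a rough cross-check outside the full LKP harness, a plain create/close/unlink loop on tmpfs exercises roughly the same path as aim9's creat-clo test. This is an approximation of mine (arbitrary path and iteration count), not the benchmark itself; compare the wall time across the two kernels:

    # Approximate creat-clo: repeatedly create, close and unlink a file on tmpfs.
    d=/dev/shm/creat-clo-demo
    mkdir -p "$d"
    time for i in $(seq 1 100000); do
        : > "$d/f"     # create + close
        rm -f "$d/f"   # unlink
    done
    rmdir "$d"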


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Ying Huang
---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: aim9
default-monitors:
wait: activate-monitor
kmsg:
uptime:
iostat:
vmstat:
numa-numastat:
numa-vmstat:
numa-meminfo:
proc-vmstat:
proc-stat:
interval: 10
meminfo:
slabinfo:
interrupts:
lock_stat:
latency_stats:
softirqs:
bdi_dev_mapping:
diskstats:
nfsstat:
cpuidle:
cpufreq-stats:
turbostat:
pmeter:
sched_debug:
interval: 60
cpufreq_governor: performance
default-watchdogs:
oom-killer:
watchdog:
commit: 64291f7db5bd8150a74ad2036f1037e6a0428df2
model: Westmere-EX
memory: 128G
nr_cpu: 80
nr_hdd_partitions: 0
hdd_partitions:
swap_partitions:
rootfs_partition:
category: benchmark
aim9:
testtime: 300s
test: creat-clo
queue: cyclic
testbox: lkp-wsx02
tbox_group: lkp-wsx02
kconfig: x86_64-rhel
enqueue_time: 2015-08-31 20:14:07.364176710 +08:00
id: d4542fecc14a8c3b48163f5795d811193722efa8
user: lkp
compiler: gcc-4.9
head_commit: 2d11c675e2c328a1763d4fbad7b6684879f8102a
base_commit: 64291f7db5bd8150a74ad2036f1037e6a0428df2
branch: linux-devel/devel-hourly-2015083105
kernel: "/pkg/linux/x86_64-rhel/gcc-4.9/64291f7db5bd8150a74ad2036f1037e6a0428df2/vmlinuz-4.2.0"
rootfs: debian-x86_64-2015-02-07.cgz
result_root: "/result/aim9/performance-300s-creat-clo/lkp-wsx02/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/64291f7db5bd8150a74ad2036f1037e6a0428df2/0"
job_file: "/lkp/scheduled/lkp-wsx02/cyclic_aim9-performance-300s-creat-clo-x86_64-rhel-CYCLIC_BASE-64291f7db5bd8150a74ad2036f1037e6a0428df2-20150831-63051-117sqqs-0.yaml"
dequeue_time: 2015-08-31 21:20:40.743213111 +08:00
max_uptime: 1655.44
initrd: "/osimage/debian/debian-x86_64-2015-02-07.cgz"
bootloader_append:
- root=/dev/ram0
- user=lkp
- job=/lkp/scheduled/lkp-wsx02/cyclic_aim9-performance-300s-creat-clo-x86_64-rhel-CYCLIC_BASE-64291f7db5bd8150a74ad2036f1037e6a0428df2-20150831-63051-117sqqs-0.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel
- branch=linux-devel/devel-hourly-2015083105
- commit=64291f7db5bd8150a74ad2036f1037e6a0428df2
- BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/64291f7db5bd8150a74ad2036f1037e6a0428df2/vmlinuz-4.2.0
- max_uptime=1655
- RESULT_ROOT=/result/aim9/performance-300s-creat-clo/lkp-wsx02/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/64291f7db5bd8150a74ad2036f1037e6a0428df2/0
- LKP_SERVER=inn
- |2-


earlyprintk=ttyS0,115200 systemd.log_level=err
debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100
panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0
console=ttyS0,115200 console=tty0 vga=normal

rw
lkp_initrd: "/lkp/lkp/lkp-x86_64.cgz"
modules_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/64291f7db5bd8150a74ad2036f1037e6a0428df2/modules.cgz"
bm_initrd: "/osimage/deps/debian-x86_64-2015-02-07.cgz/lkp.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/run-ipconfig.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/turbostat.cgz,/lkp/benchmarks/turbostat.cgz,/lkp/benchmarks/aim9-x86_64.cgz"
job_state: finished
loadavg: 0.93 0.70 0.33 1/763 10976
start_time: '1441027321'
end_time: '1441027621'
version: "/lkp/lkp/.src-20150831-174112"
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu10/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu11/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu12/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu13/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu14/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu15/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu16/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu17/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu18/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu19/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu20/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu21/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu22/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu23/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu24/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu25/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu26/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu27/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu28/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu29/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu30/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu31/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu32/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu33/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu34/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu35/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu36/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu37/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu38/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu39/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu40/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu41/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu42/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu43/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu44/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu45/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu46/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu47/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu48/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu49/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu50/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu51/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu52/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu53/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu54/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu55/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu56/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu57/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu58/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu59/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu60/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu61/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu62/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu63/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu64/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu65/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu66/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu67/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu68/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu69/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu70/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu71/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu72/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu73/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu74/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu75/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu76/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu77/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu78/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu79/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu8/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu9/cpufreq/scaling_governor
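
The per-CPU governor writes above were captured verbatim by the harness. On a reproduction machine the same setup can be applied with a single loop (assuming the cpufreq sysfs interface is present):

    # Equivalent to the per-CPU commands above: set the performance governor on every CPU.
    for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        echo performance > "$g"
    done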