Subject: Re: power increase issue on light load
From: "Alex,Shi" <>
Date: Wed, 29 Jun 2011 11:22:44 +0800
> Looking at the schedstat data Alex posted:
> - Distribution of load balances across cores looks about the same.
> - Load balancer does more idle balances on 3.0-rc4 as compared to
>   2.6.39 on SMT and NUMA domains. Busy and newidle balances are a
>   mixed bag.
> - I see far fewer affine wakeups on 3.0-rc4 as compared to 2.6.39.
>   About half as many affine wakeups on SMT and about a quarter as
>   many on NUMA.
>
> I'm investigating the impact of the load resolution patchset on
> effective load and wake affine calculations. This seems to be the most
> obvious difference from the schedstat data.
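As far as I understand that patchset (just my reading of it, so the
names and values below are from memory rather than copied from the
tree), the core change is that on 64-bit the load weights used in the
balance and effective-load math carry an extra 10 bits of resolution,
roughly:

/*
 * Sketch of the higher load resolution as I understand it; illustration
 * only, details may differ from the actual patchset.
 */
#if BITS_PER_LONG > 32
# define SCHED_LOAD_RESOLUTION	10
# define scale_load(w)		((w) << SCHED_LOAD_RESOLUTION)
# define scale_load_down(w)	((w) >> SCHED_LOAD_RESOLUTION)
#else
# define SCHED_LOAD_RESOLUTION	0
# define scale_load(w)		(w)
# define scale_load_down(w)	(w)
#endif

#define SCHED_LOAD_SHIFT	(10 + SCHED_LOAD_RESOLUTION)
#define SCHED_LOAD_SCALE	(1L << SCHED_LOAD_SHIFT)

/* e.g. a nice-0 weight of 1024 becomes scale_load(1024) == 1048576 */

So anywhere effective_load/wake_affine used to round or saturate on the
old 1024-based weights now sees the shifted-up values, which could
plausibly shift the affine-wakeup decisions.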
> Alex -- I have a couple of questions about your test setup and results.
> - What is the impact on throughput of these benchmarks?

For both bltk-office and light-load SPECpower at 10%/20%/30% load, the
throughput shows almost no change on my NHM-EP server and T410 laptop.

> - Would it be possible to get a "perf sched" trace on these two kernels?
I will run the testing again and give you the data later, but I didn't
find much useful data in 'perf record -e sched*'.
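If it would help, next time I can also capture it with the dedicated
tooling, e.g. 'perf sched record' during the benchmark window and then
'perf sched latency' for the per-task summary, rather than a raw
'perf record -e sched*'; which window to record is of course still open.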
> - I'm assuming the three sched domains are SMT, MC and NUMA. Is that
>   right? Do you have any powersavings balance or special sched domain
>   flags enabled?

Yes, and sched_mc_power_savings and sched_smt_power_savings were both
set. The NHM-EP domains look like below:
CPU15 attaching sched-domain:
 domain 0: span 7,15 level SIBLING
  groups: 15 (cpu_power = 589) 7 (cpu_power = 589)
  domain 1: span 1,3,5,7,9,11,13,15 level MC
   groups: 7,15 (cpu_power = 1178) 1,9 (cpu_power = 1178) 3,11 (cpu_power = 1178) 5,13 (cpu_power = 1178)
   domain 2: span 0-15 level NODE
    groups: 1,3,5,7,9,11,13,15 (cpu_power = 4712) 0,2,4,6,8,10,12,14 (cpu_power = 4712)
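If I read the cpu_power numbers right, they are self-consistent: each
SIBLING gets 589 because the SMT gain of ~1178 (about 1.15 * the 1024
base scale) is split between the two hyperthreads, the MC groups sum
their two siblings (589 + 589 = 1178), and the NODE groups sum their
four MC groups (4 * 1178 = 4712).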
> - Are you using group scheduling? If so, what does your setup look like?
I enabled the FAIR group config by default, but I have also tried
disabling it and the problem is the same, so it isn't related to group
scheduling.

> -Thanks,
> Nikhil
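For what it's worth, the rough shape of the affine-wakeup comparison I
have in mind when looking at those counters is below; it is only an
illustration of where the (now higher-resolution) load weights enter,
not the actual kernel/sched_fair.c code:

/*
 * Illustration only, not the real wake_affine(): allow pulling the
 * wakee to the waking CPU when, after moving the wakee's weight over,
 * the waking CPU still looks no more loaded than the wakee's previous
 * CPU, within the usual imbalance_pct slack.  All loads here are the
 * scaled load weights, so a change in their resolution feeds straight
 * into this comparison.
 */
static int want_affine_wakeup(unsigned long this_load,
			      unsigned long prev_load,
			      unsigned long wakee_weight,
			      unsigned int imbalance_pct)
{
	/* the wakee's weight leaves its previous CPU ... */
	prev_load = prev_load > wakee_weight ? prev_load - wakee_weight : 0;

	/* ... and would be added to the waking CPU */
	return (this_load + wakee_weight) * 100 <= prev_load * imbalance_pct;
}

A drop in affine wakeups would then mean this kind of comparison fails
more often on 3.0-rc4 than it did on 2.6.39 for the same load pattern.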