From: Vincent Guittot <>
Date: Fri, 10 Nov 2017 09:29:13 +0100
Subject: Re: [PATCH] sched/fair: Consider RT/IRQ pressure in capacity_spare_wake
On 9 November 2017 at 19:52, Joel Fernandes <joelaf@google.com> wrote:
> capacity_spare_wake in the slow path influences choice of idlest groups,
> as we search for groups with maximum spare capacity. In scenarios where
> RT pressure is high, a suboptimal group can be chosen and hurt
> performance of the task being woken up.
>
> Several tests with results are included below to show improvements with
> this change.
>
> 1) Hackbench on Pixel 2 Android device (4x4 ARM64 Octa core)
"4x4 ARM64 Octa core" is confusing . At least for me, 4x4 means 16 cores :-)
> ------------------------------------------------------------
> Here we have RT activity running on the big CPU cluster, induced with
> rt-app, and hackbench running in parallel. The RT tasks are bound to
> 4 CPUs on the big cluster (CPUs 4,5,6,7) and have 100ms periodicity
> with runtime=20ms, sleep=80ms.
>
> Hackbench shows a big (30%) improvement when the number of tasks is 8
> and 32. Note: data is completion time in seconds (lower is better).
> Number of loops for 8 and 16 tasks is 50000, and for 32 tasks it's 20000.
> +--------+-----+-------+-------------------+---------------------------+
> | groups | fds | tasks |   Without Patch   |         With Patch        |
> +--------+-----+-------+---------+---------+-----------------+---------+
> |        |     |       | Mean    | Stdev   | Mean            | Stdev   |
> +--------+-----+-------+---------+---------+-----------------+---------+
> | 1      | 8   | 8     | 1.0534  | 0.13722 | 0.7293 (+30.7%) | 0.02653 |
> | 2      | 8   | 16    | 1.6219  | 0.16631 | 1.6391 (-1%)    | 0.24001 |
> | 4      | 8   | 32    | 1.2538  | 0.13086 | 1.1080 (+11.6%) | 0.16201 |
> +--------+-----+-------+---------+---------+-----------------+---------+
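(As an aside, for anyone wanting to reproduce the RT pressure without
rt-app: below is a minimal C sketch of an equivalent periodic RT task,
20ms busy / 80ms sleep at a 100ms period, pinned to one big CPU. The
CPU argument, the FIFO priority of 50, and the helper run_for_ms() are
my own hypothetical choices, not taken from the test setup above.)

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

/* busy-loop for roughly ms milliseconds */
static void run_for_ms(long ms)
{
	struct timespec start, now;

	clock_gettime(CLOCK_MONOTONIC, &start);
	do {
		clock_gettime(CLOCK_MONOTONIC, &now);
	} while ((now.tv_sec - start.tv_sec) * 1000 +
		 (now.tv_nsec - start.tv_nsec) / 1000000 < ms);
}

int main(int argc, char **argv)
{
	int cpu = argc > 1 ? atoi(argv[1]) : 4;	/* one of the big CPUs 4-7 */
	struct sched_param sp = { .sched_priority = 50 };
	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(cpu, &mask);
	if (sched_setaffinity(0, sizeof(mask), &mask))
		perror("sched_setaffinity");
	if (sched_setscheduler(0, SCHED_FIFO, &sp))
		perror("sched_setscheduler");	/* needs root */

	for (;;) {			/* run until killed */
		run_for_ms(20);		/* runtime = 20ms */
		usleep(80 * 1000);	/* sleep   = 80ms */
	}
	return 0;
}

One instance per big CPU (e.g. ./rt_load 4 through ./rt_load 7)
approximates the rt-app workload described above.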
Out of curiosity, do you know why you don't see any improvement for 16 tasks but only for 8 and 32 tasks?
>
> 2) Rohit ran the barrier.c test (details below) with the following
> improvements:
> ------------------------------------------------------------------------
> This was Rohit's original use case for a patch he posted at [1];
> however, from his recent tests he showed my patch can replace his slow
> path changes [1], and there's no need to selectively scan/skip CPUs in
> find_idlest_group_cpu in the slow path to get the improvement he sees.
>
> barrier.c (OpenMP code) is used as a micro-benchmark. It does a number
> of iterations with a barrier sync at the end of each for loop.
>
> Here barrier.c is running along with ping on CPU 0 and 1 as:
> 'ping -l 10000 -q -s 10 -f hostX'
>
> barrier.c can be found at:
> http://www.spinics.net/lists/kernel/msg2506955.html
>
> Following are the results for the iterations per second with this
> micro-benchmark (higher is better), on a 44 core, 2 socket, 88 thread
> Intel x86 machine:
> +--------+------------------+---------------------------+
> |Threads | Without patch    |         With patch        |
> +--------+--------+---------+-----------------+---------+
> |        | Mean   | Std Dev | Mean            | Std Dev |
> +--------+--------+---------+-----------------+---------+
> |1       | 539.36 | 60.16   | 572.54 (+6.15%) | 40.95   |
> |2       | 481.01 | 19.32   | 530.64 (+10.32%)| 56.16   |
> |4       | 474.78 | 22.28   | 479.46 (+0.99%) | 18.89   |
> |8       | 450.06 | 24.91   | 447.82 (-0.50%) | 12.36   |
> |16      | 436.99 | 22.57   | 441.88 (+1.12%) | 7.39    |
> |32      | 388.28 | 55.59   | 429.4  (+10.59%)| 31.14   |
> |64      | 314.62 | 6.33    | 311.81 (-0.89%) | 11.99   |
> +--------+--------+---------+-----------------+---------+
>
> 3) ping+hackbench test on bare-metal server (Rohit ran this test)
> ----------------------------------------------------------------
> Here hackbench is running in threaded mode along with ping running on
> CPU 0 and 1 as:
> 'ping -l 10000 -q -s 10 -f hostX'
>
> This test is running on a 2 socket, 20 core and 40 thread Intel x86
> machine:
> Number of loops is 10000 and runtime is in seconds (lower is better).
>
> +--------------+-----------------+--------------------------+
> |Task Groups   | Without patch   |        With patch        |
> |              +-------+---------+----------------+---------+
> |(Groups of 40)| Mean  | Std Dev | Mean           | Std Dev |
> +--------------+-------+---------+----------------+---------+
> |1             | 0.851 | 0.007   | 0.828 (+2.77%) | 0.032   |
> |2             | 1.083 | 0.203   | 1.087 (-0.37%) | 0.246   |
> |4             | 1.601 | 0.051   | 1.611 (-0.62%) | 0.055   |
> |8             | 2.837 | 0.060   | 2.827 (+0.35%) | 0.031   |
> |16            | 5.139 | 0.133   | 5.107 (+0.63%) | 0.085   |
> |25            | 7.569 | 0.142   | 7.503 (+0.88%) | 0.143   |
> +--------------+-------+---------+----------------+---------+
>
> [1] https://patchwork.kernel.org/patch/9991635/
>
> Matt Fleming also ran cyclictest and several different hackbench tests
> on his test machines to sanity-check that the patch doesn't harm any
> of his use cases.
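For readers unfamiliar with the benchmark: the actual barrier.c is at
the spinics.net link above; the sketch below only captures its spirit
(per-thread loop, one barrier per iteration, score in iterations per
second) and is not the exact source. The iteration count is an
arbitrary placeholder.

/* build: gcc -O2 -fopenmp barrier_sketch.c; the thread count comes
 * from OMP_NUM_THREADS, matching the Threads column in the table */
#include <omp.h>
#include <stdio.h>

#define ITERS 100000

int main(void)
{
	double start = omp_get_wtime();

	/* every thread runs the same loop; the barrier forces all threads
	 * to rendezvous once per iteration, so a single badly placed
	 * wakeup (e.g. onto the CPUs busy handling ping) slows everyone */
	#pragma omp parallel
	for (int i = 0; i < ITERS; i++) {
		#pragma omp barrier
	}

	printf("%.2f iterations/sec\n", ITERS / (omp_get_wtime() - start));
	return 0;
}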
>
> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> Cc: Morten Rasmussen <morten.rasmussen@arm.com>
> Cc: Brendan Jackman <brendan.jackman@arm.com>
> Tested-by: Rohit Jain <rohit.k.jain@oracle.com>
> Tested-by: Matt Fleming <matt@codeblueprint.co.uk>
> Signed-off-by: Joel Fernandes <joelaf@google.com>
> ---
>  kernel/sched/fair.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 56f343b8e749..ba9609407cb9 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5724,7 +5724,7 @@ static int cpu_util_wake(int cpu, struct task_struct *p);
>
>  static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)
>  {
> -	return capacity_orig_of(cpu) - cpu_util_wake(cpu, p);
> +	return max_t(long, capacity_of(cpu) - cpu_util_wake(cpu, p), 0);
Makes sense.
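The key point is that capacity_of() reflects what is left for CFS after
RT/IRQ pressure, so under heavy RT load it can drop below the waking
task's utilization, and the unclamped unsigned subtraction would wrap;
capacity_orig_of() could never go negative, but it also never saw the
pressure. A toy userspace illustration (the capacity and utilization
values are made-up numbers, not from the tests above):

#include <stdio.h>

/* userspace stand-in for the kernel's max_t() clamp */
#define max_t(type, a, b) ((type)(a) > (type)(b) ? (type)(a) : (type)(b))

int main(void)
{
	unsigned long capacity_orig = 1024;	/* full capacity of a big CPU */
	unsigned long capacity = 300;	/* left for CFS under ~70% RT pressure */
	unsigned long util = 400;	/* CFS utilization on that CPU */

	/* old formula: RT pressure invisible, the CPU looks like it
	 * has plenty of spare capacity */
	printf("old spare: %lu\n", capacity_orig - util);

	/* new formula: pressure accounted for, and the clamp keeps the
	 * unsigned subtraction from wrapping to a huge value */
	printf("new spare: %ld\n", max_t(long, capacity - util, 0));
	return 0;
}

This prints "old spare: 624" and "new spare: 0", i.e. the patched
formula correctly reports no spare capacity on the RT-loaded CPU.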
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
> }
>
> /*
> --
> 2.15.0.448.gf294e3d99a-goog
>