    Subject: [PATCH v2 11/11] powerpc/smp: Optimize update_coregroup_mask
    All threads of an SMT4/SMT8 core are either all part of a CPU's coregroup
    mask or all outside it. Use this relation to reduce the number of
    iterations needed to find all the CPUs that share the same coregroup.

    Use a temporary mask to iterate through the CPUs that may share the
    coregroup mask. Also, instead of setting one CPU at a time into
    cpu_coregroup_mask, copy the whole SMT4/SMT8 submask in one shot.
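
    The intent of this walk can be sketched in plain userspace C, with CPUs
    modelled as bits in a uint64_t rather than a struct cpumask. The helpers
    (cpu_submask, coregroup_mask, coregroup_of) and the fixed SMT4 topology
    are made-up stand-ins for this illustration, not kernel APIs:

    #include <stdint.h>
    #include <stdio.h>

    #define NR_CPUS 16

    /* Hypothetical topology: SMT4 cores, 8 CPUs per coregroup. */
    static uint64_t cpu_submask(int cpu)     { return 0xfULL << (cpu & ~3); }
    static uint64_t coregroup_mask(int cpu)  { return 0xffULL << (cpu & ~7); }
    static int coregroup_of(int cpu)         { return cpu / 8; }

    static uint64_t build_coregroup_mask(int cpu, uint64_t online)
    {
            /* Copy the whole submask in one shot instead of one CPU at a time. */
            uint64_t result = cpu_submask(cpu);
            /* Temporary mask: CPUs still left to examine. */
            uint64_t todo = online & ~result;

            for (int i = 0; i < NR_CPUS; i++) {
                    if (!(todo & (1ULL << i)))
                            continue;
                    if (coregroup_of(i) == coregroup_of(cpu)) {
                            result |= cpu_submask(i);   /* whole sibling group at once */
                            todo &= ~cpu_submask(i);    /* never revisit its threads */
                    } else {
                            todo &= ~coregroup_mask(i); /* drop the foreign coregroup */
                    }
            }
            return result;
    }

    int main(void)
    {
            uint64_t online = (1ULL << NR_CPUS) - 1;

            printf("coregroup mask of CPU 2: %#llx\n",
                   (unsigned long long)build_coregroup_mask(2, online));
            return 0;
    }

    With the whole submask copied and removed from the scan mask in one step,
    the loop visits at most one CPU per sibling group instead of every online
    CPU in the node.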

    Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
    Cc: LKML <linux-kernel@vger.kernel.org>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Nicholas Piggin <npiggin@gmail.com>
    Cc: Anton Blanchard <anton@ozlabs.org>
    Cc: Oliver O'Halloran <oohall@gmail.com>
    Cc: Nathan Lynch <nathanl@linux.ibm.com>
    Cc: Michael Neuling <mikey@neuling.org>
    Cc: Gautham R Shenoy <ego@linux.vnet.ibm.com>
    Cc: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
    Cc: Ingo Molnar <mingo@kernel.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Valentin Schneider <valentin.schneider@arm.com>
    Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
    ---
    arch/powerpc/kernel/smp.c | 30 ++++++++++++++++++++++--------
    1 file changed, 22 insertions(+), 8 deletions(-)

    diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
    index b48ae4e306d3..bbaea93dc558 100644
    --- a/arch/powerpc/kernel/smp.c
    +++ b/arch/powerpc/kernel/smp.c
    @@ -1339,19 +1339,33 @@ static inline void add_cpu_to_smallcore_masks(int cpu)

     static void update_coregroup_mask(int cpu)
     {
    -        int first_thread = cpu_first_thread_sibling(cpu);
    +        struct cpumask *(*submask_fn)(int) = cpu_sibling_mask;
    +        cpumask_var_t mask;
             int coregroup_id = cpu_to_coregroup_id(cpu);
             int i;

    -        cpumask_set_cpu(cpu, cpu_coregroup_mask(cpu));
    -        for_each_cpu_and(i, cpu_online_mask, cpu_cpu_mask(cpu)) {
    -                int fcpu = cpu_first_thread_sibling(i);
    +        alloc_cpumask_var_node(&mask, GFP_KERNEL, cpu_to_node(cpu));
    +        cpumask_and(mask, cpu_online_mask, cpu_cpu_mask(cpu));
    +
    +        if (shared_caches)
    +                submask_fn = cpu_l2_cache_mask;
    +
    +        /* Update coregroup mask with all the CPUs that are part of submask */
    +        or_cpumasks_related(cpu, cpu, submask_fn, cpu_coregroup_mask);
    +
    +        /* Skip all CPUs already part of coregroup mask */
    +        cpumask_andnot(mask, mask, cpu_coregroup_mask(cpu));

    -                if (fcpu == first_thread)
    -                        set_cpus_related(cpu, i, cpu_coregroup_mask);
    -                else if (coregroup_id == cpu_to_coregroup_id(i))
    -                        set_cpus_related(cpu, i, cpu_coregroup_mask);
    +        for_each_cpu(i, mask) {
    +                /* Skip all CPUs not part of this coregroup */
    +                if (coregroup_id == cpu_to_coregroup_id(i)) {
    +                        or_cpumasks_related(cpu, i, submask_fn, cpu_coregroup_mask);
    +                        cpumask_andnot(mask, mask, submask_fn(i));
    +                } else {
    +                        cpumask_andnot(mask, mask, cpu_coregroup_mask(i));
    +                }
             }
    +        free_cpumask_var(mask);
     }

     static void add_cpu_to_masks(int cpu)
    --
    2.17.1