From: Michal Hocko <mhocko@kernel.org>
Date: Wed, 6 Nov 2019
Subject: Re: [PATCH v2] lib: optimize cpumask_local_spread()
On Tue 05-11-19 17:33:59, Andrew Morton wrote:
> On Tue, 5 Nov 2019 08:01:41 +0100 Michal Hocko <mhocko@kernel.org> wrote:
>
> > On Mon 04-11-19 18:27:48, Shaokun Zhang wrote:
> > > From: yuqi jin <jinyuqi@huawei.com>
> > >
> > > In a multi-processor NUMA system, an I/O device may serve CPUs
> > > spread across multiple NUMA nodes. When the local node's CPUs are
> > > exhausted, it is better to fall back to the node closest to the
> > > local node instead of picking any online CPU immediately.
> > >
> > > The current code only considers the local NUMA node; when it
> > > falls back to non-local nodes it does not take the distances
> > > between NUMA nodes into account. Let's optimize it to find the
> > > nearest node via the NUMA distance table. Returning a CPU from
> > > the nearest node performs better than returning one from an
> > > arbitrary node.
> >
> > Numbers please
>
> The changelog had
>
> : When Parameter Server workload is tested using NIC device on Huawei
> : Kunpeng 920 SoC:
> : Without the patch, the performance is 22W (220,000) QPS;
> : with this patch applied, it improves to 26W (260,000) QPS.

Maybe it is just me but this doesn't really tell me a lot. What is
Parameter Server workload? What do I do to replicate those numbers? Is
this really specific to the Kunpeng 920 server? What is the usual
variance of the performance numbers?

> > [...]
> > > +/**
> > > + * cpumask_local_spread - select the i'th cpu with local numa cpus first
> > > + * @i: index number
> > > + * @node: local numa_node
> > > + *
> > > + * This function selects an online CPU according to a numa aware policy;
> > > + * local cpus are returned first, followed by the nearest non-local ones,
> > > + * then it wraps around.
> > > + *
> > > + * It's not very efficient, but useful for setup.
> > > + */
> > > +unsigned int cpumask_local_spread(unsigned int i, int node)
> > > +{
> > > +	int node_dist[MAX_NUMNODES] = {0};
> > > +	bool used[MAX_NUMNODES] = {0};
> >
> > Ugh. This might be a lot of stack space. Some distro kernels use a
> > large NODE_SHIFT (e.g. 10, so this would be 4kB of stack space just
> > for node_dist).
>
> Yes, that's big. From a quick peek I suspect we could get by using an
> array of unsigned shorts here but that might be fragile over time even
> if it works now?

Whatever data type we use, it will still be quite large to keep on the
stack.
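
To put rough numbers on it (back-of-the-envelope, assuming NODE_SHIFT=10,
i.e. MAX_NUMNODES=1024, and the usual type sizes):

	int            node_dist[1024];	/* 1024 * 4 = 4096 bytes */
	unsigned short node_dist[1024];	/* 1024 * 2 = 2048 bytes */
	bool           used[1024];	/* 1024 * 1 = 1024 bytes */

Even with the narrower type, the two arrays together still take ~3kB of
a frame out of a 16kB kernel stack.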

> Perhaps we could make it a statically allocated array and protect the
> entire thing with a spin_lock_irqsave()? It's not a frequently called
> function.

This is what I was suggesting in previous review feedback.
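
For reference, a minimal sketch of that shape (untested and illustrative
only; the spread_* names are made up and the distance walk is elided, so
this is not the actual patch):

	#include <linux/cpumask.h>
	#include <linux/spinlock.h>
	#include <linux/topology.h>

	/* Moved off the stack; shared, hence the lock. */
	static int spread_node_dist[MAX_NUMNODES];
	static bool spread_node_used[MAX_NUMNODES];
	static DEFINE_SPINLOCK(spread_lock);

	unsigned int cpumask_local_spread(unsigned int i, int node)
	{
		unsigned long flags;
		unsigned int cpu;

		/* Serialize all users of the static scratch arrays. */
		spin_lock_irqsave(&spread_lock, flags);

		/*
		 * ... fill spread_node_dist[] from node_distance() and
		 * walk the nodes nearest-first, picking the i'th online
		 * cpu, as the patch does ...
		 */
		cpu = cpumask_first(cpu_online_mask);	/* placeholder */

		spin_unlock_irqrestore(&spread_lock, flags);
		return cpu;
	}

The spinlock makes the slow path a bit slower still, but for a helper
that only runs at setup time that seems a fair trade for getting the
arrays off the stack.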

--
Michal Hocko
SUSE Labs
