Subject: Re: [PATCH 06/13] irq: add a helper spread an affinity mask for MSI/MSI-X vectors
On 06/14/2016 11:54 PM, Guilherme G. Piccoli wrote:
> On 06/14/2016 04:58 PM, Christoph Hellwig wrote:
> I take this opportunity to ask you something, since I'm working on
> related code in a specific driver - sorry in advance if my question is
> silly or if I misunderstood your code.
>
> The function irq_create_affinity_mask() below deals with the case in
> which we have nr_vecs < num_online_cpus(); in this case, wouldn't it be
> a good idea to try to distribute the vecs among the cores?
>
> Example: if we have 128 online CPUs, 8 per core (meaning 16 cores) and
> 64 vecs, I guess it would be ideal to distribute 4 vecs _per core_,
> leaving 4 CPUs in each core without a vec.

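Something along those lines, I assume? Below is a quick userspace sketch
of that per-core spreading, just to make the example concrete. It is not
kernel code: the topology is hard-coded to the numbers from the example
above, no cpumask/topology APIs are used, and the CPU numbering simply
assumes the SMT siblings of a core are numbered consecutively.

/*
 * Userspace model of the per-core spreading described above -- not
 * kernel code; topology is hard-coded and CPU numbering assumes the
 * SMT siblings of a core are consecutive.
 */
#include <stdio.h>

#define NR_CPUS			128	/* online CPUs in the example */
#define THREADS_PER_CORE	8	/* SMT threads per core */
#define NR_VECS			64	/* MSI-X vectors to place */

int main(void)
{
	int nr_cores = NR_CPUS / THREADS_PER_CORE;	/* 16 cores */
	int vec;

	/*
	 * Hand the vectors out round-robin across cores; every time we
	 * wrap around we move on to the next SMT thread of each core.
	 * With 64 vectors and 16 cores that is 4 vectors per core, on
	 * 4 of the 8 threads, as in the example above.
	 */
	for (vec = 0; vec < NR_VECS; vec++) {
		int core = vec % nr_cores;
		int thread = vec / nr_cores;
		int cpu = core * THREADS_PER_CORE + thread;

		printf("vector %2d -> CPU %3d (core %2d, thread %d)\n",
		       vec, cpu, core, thread);
	}
	return 0;
}
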
Hello Christoph and Guilherme,

I would also like to see irq_create_affinity_mask() modified such that
it implements Guilherme's algorithm. I think blk-mq requests should be
processed by a CPU core on the NUMA node from which the request was
submitted. With the proposed algorithm, if the number of MSI-X vectors
is less than or equal to the number of CPU cores of a single NUMA node,
all interrupt vectors will be assigned to the first NUMA node.

Bart.
