Subject: Re: [patch 21/30] net/mlx4: Use effective interrupt affinity
From: Tariq Toukan <tariqt@nvidia.com>
Date: Sun, 13 Dec 2020


On 12/10/2020 9:25 PM, Thomas Gleixner wrote:
> Using the interrupt affinity mask for checking locality is not really
> working well on architectures which support effective affinity masks.
>
> The affinity mask is either the system wide default or set by user space,
> but the architecture can or even must reduce the mask to the effective set,
> which means that checking the affinity mask itself does not really tell
> about the actual target CPUs.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Tariq Toukan <tariqt@nvidia.com>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: Jakub Kicinski <kuba@kernel.org>
> Cc: netdev@vger.kernel.org
> Cc: linux-rdma@vger.kernel.org
> ---
> drivers/net/ethernet/mellanox/mlx4/en_cq.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
> @@ -117,7 +117,7 @@ int mlx4_en_activate_cq(struct mlx4_en_p
>  			assigned_eq = true;
>  		}
>  		irq = mlx4_eq_get_irq(mdev->dev, cq->vector);
> -		cq->aff_mask = irq_get_affinity_mask(irq);
> +		cq->aff_mask = irq_get_effective_affinity_mask(irq);
>  	} else {
>  		/* For TX we use the same irq per
>  		ring we assigned for the RX    */
>
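
The effective affinity mask is the subset of the requested affinity that the
interrupt is actually routed to, so the locality check in the RX poll path is
only meaningful against that mask. Below is a minimal sketch of such a check,
loosely following the mlx4_en NAPI pattern; the names example_cq,
example_process_cq and example_poll are illustrative placeholders, not the
literal driver code.

#include <linux/cpumask.h>
#include <linux/netdevice.h>
#include <linux/smp.h>

struct example_cq {
	struct napi_struct napi;
	const struct cpumask *aff_mask;	/* effective affinity of the CQ's IRQ */
};

/* Stand-in for the real completion processing (mlx4_en_process_rx_cq). */
static int example_process_cq(struct example_cq *cq, int budget)
{
	return 0;
}

static int example_poll(struct napi_struct *napi, int budget)
{
	struct example_cq *cq = container_of(napi, struct example_cq, napi);
	int done = example_process_cq(cq, budget);

	/* If this CPU is not one of the CPUs the interrupt actually fires
	 * on (e.g. after an affinity change), stop polling here so NAPI
	 * can be rescheduled on one of the real target CPUs.
	 */
	if (done == budget &&
	    !cpumask_test_cpu(smp_processor_id(), cq->aff_mask))
		done = 0;

	if (done < budget)
		napi_complete_done(napi, done);

	return done;
}

With the hunk above, cq->aff_mask points at the effective mask, so this test
reflects where the interrupt really lands rather than the possibly wider
user-requested mask.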

Reviewed-by: Tariq Toukan <tariqt@nvidia.com>

Thanks.
