Subject: [PATCH 4.9 91/93] net/mlx5: E-Switch, Re-enable RoCE on mode change only after FDB destroy
4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Or Gerlitz <ogerlitz@mellanox.com>


[ Upstream commit 5bae8c031053c69b4aa74b7f1ba15d4ec8426208 ]

We must re-enable RoCE on the e-switch management port (PF) only after destroying
the FDB in its switchdev/offloaded mode. Otherwise, when encapsulation is supported,
this re-enablement will fail.

Also, it's more natural and symmetric to disable RoCE on the PF before we create
the FDB under switchdev mode, so do that as well, and revert it if we hit an error
later during the mode change.

Fixes: 9da34cd34e85 ('net/mlx5: Disable RoCE on the e-switch management [..]')
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Hadar Hen Zion <hadarh@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c | 29 ++++++++-----
1 file changed, 18 insertions(+), 11 deletions(-)

--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -651,9 +651,14 @@ int esw_offloads_init(struct mlx5_eswitc
int vport;
int err;

+ /* disable PF RoCE so missed packets don't go through RoCE steering */
+ mlx5_dev_list_lock();
+ mlx5_remove_dev_by_protocol(esw->dev, MLX5_INTERFACE_PROTOCOL_IB);
+ mlx5_dev_list_unlock();
+
err = esw_create_offloads_fdb_table(esw, nvports);
if (err)
- return err;
+ goto create_fdb_err;

err = esw_create_offloads_table(esw);
if (err)
@@ -673,11 +678,6 @@ int esw_offloads_init(struct mlx5_eswitc
goto err_reps;
}

- /* disable PF RoCE so missed packets don't go through RoCE steering */
- mlx5_dev_list_lock();
- mlx5_remove_dev_by_protocol(esw->dev, MLX5_INTERFACE_PROTOCOL_IB);
- mlx5_dev_list_unlock();
-
return 0;

err_reps:
@@ -694,6 +694,13 @@ create_fg_err:

create_ft_err:
esw_destroy_offloads_fdb_table(esw);
+
+create_fdb_err:
+ /* enable back PF RoCE */
+ mlx5_dev_list_lock();
+ mlx5_add_dev_by_protocol(esw->dev, MLX5_INTERFACE_PROTOCOL_IB);
+ mlx5_dev_list_unlock();
+
return err;
}

@@ -701,11 +708,6 @@ static int esw_offloads_stop(struct mlx5
{
int err, err1, num_vfs = esw->dev->priv.sriov.num_vfs;

- /* enable back PF RoCE */
- mlx5_dev_list_lock();
- mlx5_add_dev_by_protocol(esw->dev, MLX5_INTERFACE_PROTOCOL_IB);
- mlx5_dev_list_unlock();
-
mlx5_eswitch_disable_sriov(esw);
err = mlx5_eswitch_enable_sriov(esw, num_vfs, SRIOV_LEGACY);
if (err) {
@@ -715,6 +717,11 @@ static int esw_offloads_stop(struct mlx5
esw_warn(esw->dev, "Failed setting eswitch back to offloads, err %d\n", err);
}

+ /* enable back PF RoCE */
+ mlx5_dev_list_lock();
+ mlx5_add_dev_by_protocol(esw->dev, MLX5_INTERFACE_PROTOCOL_IB);
+ mlx5_dev_list_unlock();
+
return err;
}
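
For reference, the resulting flow in esw_offloads_init() after this patch is
sketched below. This is a simplified illustration rather than the literal driver
code: the offloads table, vport rx group and representor setup between FDB
creation and the final return are elided, along with their error labels.

/* Simplified sketch (not the literal mlx5 code) of the ordering this patch
 * establishes: PF RoCE is removed before the offloads FDB is created, and
 * added back only once the FDB is gone (on the error path here, or after
 * esw_destroy_offloads_fdb_table() in esw_offloads_stop()).
 */
int esw_offloads_init(struct mlx5_eswitch *esw, int nvports)
{
	int err;

	/* disable PF RoCE so missed packets don't go through RoCE steering */
	mlx5_dev_list_lock();
	mlx5_remove_dev_by_protocol(esw->dev, MLX5_INTERFACE_PROTOCOL_IB);
	mlx5_dev_list_unlock();

	err = esw_create_offloads_fdb_table(esw, nvports);
	if (err)
		goto create_fdb_err;

	/* ... offloads table, vport rx group and representor setup, whose
	 * error unwinding destroys the FDB before falling through to
	 * create_fdb_err ...
	 */

	return 0;

create_fdb_err:
	/* enable back PF RoCE */
	mlx5_dev_list_lock();
	mlx5_add_dev_by_protocol(esw->dev, MLX5_INTERFACE_PROTOCOL_IB);
	mlx5_dev_list_unlock();

	return err;
}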

