Subject: [PATCH 5.2 123/131] IB/mlx5: Use direct mkey destroy command upon UMR unreg failure
    From: Yishai Hadas <yishaih@mellanox.com>

    commit afd1417404fba6dbfa6c0a8e5763bd348da682e4 upstream.

Use a direct firmware command to destroy the mkey when the unreg UMR
operation has failed.

This prevents an mkey from leaking out of the cache after it fails to
be destroyed by a UMR WR.

If the MR cache limit has not been reached, a call is issued to add
another cache entry in place of the destroyed one.

In addition, the warning message is replaced with a WARN_ON(), as this
flow is fatal and cannot happen unless there is a bug somewhere.

    Link: https://lore.kernel.org/r/20190723065733.4899-4-leon@kernel.org
    Cc: <stable@vger.kernel.org> # 4.10
    Fixes: 49780d42dfc9 ("IB/mlx5: Expose MR cache for mlx5_ib")
    Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
    Reviewed-by: Artemy Kovalyov <artemyko@mellanox.com>
    Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
    Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
    Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
    drivers/infiniband/hw/mlx5/mr.c | 13 ++++++++-----
    1 file changed, 8 insertions(+), 5 deletions(-)

    --- a/drivers/infiniband/hw/mlx5/mr.c
    +++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -545,13 +545,16 @@ void mlx5_mr_cache_free(struct mlx5_ib_d
 		return;
 
 	c = order2idx(dev, mr->order);
-	if (c < 0 || c >= MAX_MR_CACHE_ENTRIES) {
-		mlx5_ib_warn(dev, "order %d, cache index %d\n", mr->order, c);
-		return;
-	}
+	WARN_ON(c < 0 || c >= MAX_MR_CACHE_ENTRIES);
 
-	if (unreg_umr(dev, mr))
+	if (unreg_umr(dev, mr)) {
+		mr->allocated_from_cache = false;
+		destroy_mkey(dev, mr);
+		ent = &cache->ent[c];
+		if (ent->cur < ent->limit)
+			queue_work(cache->wq, &ent->work);
 		return;
+	}
 
 	ent = &cache->ent[c];
 	spin_lock_irq(&ent->lock);
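
For readers without the tree at hand, here is a sketch of
mlx5_mr_cache_free() as it reads after this patch. The lines outside
the hunk above (the locals and the list handling at the end) are
reconstructed from the surrounding 5.2 source and should be treated as
approximate context, not as part of the change:

void mlx5_mr_cache_free(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
{
	struct mlx5_mr_cache *cache = &dev->cache;
	struct mlx5_cache_ent *ent;
	int shrink = 0;
	int c;

	if (!mr->allocated_from_cache)
		return;

	c = order2idx(dev, mr->order);
	/* A bad cache index is now asserted as a fatal bug instead of
	 * producing a warning and an early return. */
	WARN_ON(c < 0 || c >= MAX_MR_CACHE_ENTRIES);

	if (unreg_umr(dev, mr)) {
		/* UMR unreg failed: destroy the mkey directly with a
		 * firmware command so it cannot leak out of the cache,
		 * and ask the cache worker to create a replacement
		 * entry if we are still below the limit. */
		mr->allocated_from_cache = false;
		destroy_mkey(dev, mr);
		ent = &cache->ent[c];
		if (ent->cur < ent->limit)
			queue_work(cache->wq, &ent->work);
		return;
	}

	/* Normal path: return the mkey to its cache bucket. */
	ent = &cache->ent[c];
	spin_lock_irq(&ent->lock);
	list_add_tail(&mr->list, &ent->head);
	ent->cur++;
	if (ent->cur > 2 * ent->limit)
		shrink = 1;
	spin_unlock_irq(&ent->lock);

	if (shrink)
		queue_work(cache->wq, &ent->work);
}

The direct destroy_mkey() path does not go through the UMR
infrastructure, which is presumably why it can still reclaim the mkey
once the UMR unreg WR itself has failed.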
