Subject: [PATCH 3.16 193/328] iw_cxgb4: atomically flush the qp
3.16.62-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Steve Wise <swise@opengridcomputing.com>

commit bc52e9ca74b9a395897bb640c6671b2cbf716032 upstream.

__flush_qp() has a race condition: during the flush operation, the qp
lock is released, allowing another thread to post a WR, which corrupts
the queue state and can cause crashes. The lock was released to preserve
the cq/qp locking hierarchy of cq first, then qp. However, releasing the
qp lock is not necessary; both the RQ and SQ CQ locks can be acquired
first, followed by the qp lock, and then the RQ and SQ flushing can be
done without unlocking.
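
[Aside for reviewers: below is a minimal userspace sketch of the locking
order the fix enforces, modeled with pthread mutexes. The cq/qp structures
and the flush helpers are hypothetical stand-ins for illustration, not the
driver's API.]

#include <pthread.h>
#include <stdio.h>

struct cq { pthread_mutex_t lock; };
struct qp { pthread_mutex_t lock; int flushed; };

static void flush_rq(struct qp *qp) { (void)qp; printf("flush RQ\n"); }
static void flush_sq(struct qp *qp) { (void)qp; printf("flush SQ\n"); }

static void flush_qp_locked(struct qp *qp, struct cq *rchp, struct cq *schp)
{
	/* CQ locks first (skip the second when SQ and RQ share one CQ),
	 * then the QP lock. */
	pthread_mutex_lock(&rchp->lock);
	if (schp != rchp)
		pthread_mutex_lock(&schp->lock);
	pthread_mutex_lock(&qp->lock);

	if (!qp->flushed) {
		qp->flushed = 1;
		flush_rq(qp);	/* both flushes run without ever      */
		flush_sq(qp);	/* dropping the QP lock between them  */
	}

	/* Release in reverse order. */
	pthread_mutex_unlock(&qp->lock);
	if (schp != rchp)
		pthread_mutex_unlock(&schp->lock);
	pthread_mutex_unlock(&rchp->lock);
}

int main(void)
{
	struct cq rcq = { PTHREAD_MUTEX_INITIALIZER };
	struct cq scq = { PTHREAD_MUTEX_INITIALIZER };
	struct qp qp  = { PTHREAD_MUTEX_INITIALIZER, 0 };

	flush_qp_locked(&qp, &rcq, &scq);	/* flushes once */
	flush_qp_locked(&qp, &rcq, &scq);	/* no-op: already flushed */
	return 0;
}

Since a poster must take the QP lock before touching the queues, holding
that lock across both flushes closes the window in which a WR could be
posted mid-flush.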

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
[bwh: Backported to 3.16: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 drivers/infiniband/hw/cxgb4/qp.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

--- a/drivers/infiniband/hw/cxgb4/qp.c
+++ b/drivers/infiniband/hw/cxgb4/qp.c
@@ -1071,31 +1071,34 @@ static void __flush_qp(struct c4iw_qp *q
 
 	PDBG("%s qhp %p rchp %p schp %p\n", __func__, qhp, rchp, schp);
 
-	/* locking hierarchy: cq lock first, then qp lock. */
+	/* locking hierarchy: cqs lock first, then qp lock. */
 	spin_lock_irqsave(&rchp->lock, flag);
+	if (schp != rchp)
+		spin_lock(&schp->lock);
 	spin_lock(&qhp->lock);
 
 	if (qhp->wq.flushed) {
 		spin_unlock(&qhp->lock);
+		if (schp != rchp)
+			spin_unlock(&schp->lock);
 		spin_unlock_irqrestore(&rchp->lock, flag);
 		return;
 	}
 	qhp->wq.flushed = 1;
+	t4_set_wq_in_error(&qhp->wq);
 
 	c4iw_flush_hw_cq(rchp, qhp);
 	c4iw_count_rcqes(&rchp->cq, &qhp->wq, &count);
 	rq_flushed = c4iw_flush_rq(&qhp->wq, &rchp->cq, count);
-	spin_unlock(&qhp->lock);
-	spin_unlock_irqrestore(&rchp->lock, flag);
 
-	/* locking hierarchy: cq lock first, then qp lock. */
-	spin_lock_irqsave(&schp->lock, flag);
-	spin_lock(&qhp->lock);
 	if (schp != rchp)
 		c4iw_flush_hw_cq(schp, qhp);
 	sq_flushed = c4iw_flush_sq(qhp);
+
 	spin_unlock(&qhp->lock);
-	spin_unlock_irqrestore(&schp->lock, flag);
+	if (schp != rchp)
+		spin_unlock(&schp->lock);
+	spin_unlock_irqrestore(&rchp->lock, flag);
 
 	if (schp == rchp) {
 		if (t4_clear_cq_armed(&rchp->cq) &&
@@ -1129,8 +1132,8 @@ static void flush_qp(struct c4iw_qp *qhp
 	rchp = to_c4iw_cq(qhp->ibqp.recv_cq);
 	schp = to_c4iw_cq(qhp->ibqp.send_cq);
 
-	t4_set_wq_in_error(&qhp->wq);
 	if (qhp->ibqp.uobject) {
+		t4_set_wq_in_error(&qhp->wq);
 		t4_set_cq_in_error(&rchp->cq);
 		spin_lock_irqsave(&rchp->comp_handler_lock, flag);
 		(*rchp->ibcq.comp_handler)(&rchp->ibcq, rchp->ibcq.cq_context);