    Subject: [PATCH 4.4 249/342] dmaengine: dw: disable BLOCK IRQs for non-cyclic xfer
    4.4-stable review patch.  If anyone has any objections, please let me know.

    ------------------

    From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>

    commit ee1cdcdae59563535485a5f56ee72c894ab7d7ad upstream.

    Commit 2895b2cad6e7 ("dmaengine: dw: fix cyclic transfer callbacks")
    re-enabled BLOCK interrupts to make cyclic transfers work. However, this
    change is a regression for non-cyclic transfers: under a stress test the
    interrupt counters grew enormously (roughly one interrupt per 4-5 bytes in
    the UART loopback test).

    Taking the above into consideration, enable BLOCK interrupts if and only if
    the channel is programmed to perform a cyclic transfer.
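
    The rule described above can be modelled outside the driver: XFER and ERROR
    interrupts are always enabled, while BLOCK is enabled only when the channel
    is set up for a cyclic transfer. The stand-alone sketch below merely
    illustrates that rule; the struct, the bit names and chan_enable_irqs() are
    invented for the example and are not part of the dw_dmac driver.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative interrupt-source bits, not the real DW MASK registers */
    enum {
            IRQ_XFER  = 1 << 0,
            IRQ_BLOCK = 1 << 1,
            IRQ_ERROR = 1 << 2,
    };

    struct chan_model {
            bool cyclic;            /* channel programmed for a cyclic transfer? */
            unsigned int irq_mask;  /* interrupt sources enabled for the channel */
    };

    /* Mirror of the patched behaviour: BLOCK only for cyclic channels */
    static void chan_enable_irqs(struct chan_model *c)
    {
            c->irq_mask = IRQ_XFER | IRQ_ERROR;
            if (c->cyclic)
                    c->irq_mask |= IRQ_BLOCK;
    }

    int main(void)
    {
            struct chan_model mem2mem = { .cyclic = false };
            struct chan_model audio   = { .cyclic = true  };

            chan_enable_irqs(&mem2mem);
            chan_enable_irqs(&audio);

            printf("non-cyclic mask: 0x%x\n", mem2mem.irq_mask);  /* prints 0x5 */
            printf("cyclic mask:     0x%x\n", audio.irq_mask);    /* prints 0x7 */
            return 0;
    }

    In the patch itself this is achieved by setting the BLOCK bit in
    dw_dma_cyclic_start() and re-arming it in dwc_handle_cyclic(), so channels
    running ordinary (non-cyclic) transfers never take BLOCK interrupts.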

    Fixes: 2895b2cad6e7 ("dmaengine: dw: fix cyclic transfer callbacks")
    Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Acked-by: Mans Rullgard <mans@mansr.com>
    Tested-by: Mans Rullgard <mans@mansr.com>
    Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
    drivers/dma/dw/core.c | 15 ++++++++++-----
    1 file changed, 10 insertions(+), 5 deletions(-)

    --- a/drivers/dma/dw/core.c
    +++ b/drivers/dma/dw/core.c
    @@ -156,7 +156,6 @@ static void dwc_initialize(struct dw_dma
     
             /* Enable interrupts */
             channel_set_bit(dw, MASK.XFER, dwc->mask);
    -        channel_set_bit(dw, MASK.BLOCK, dwc->mask);
             channel_set_bit(dw, MASK.ERROR, dwc->mask);
     
             dwc->initialized = true;
    @@ -588,6 +587,9 @@ static void dwc_handle_cyclic(struct dw_
     
                     spin_unlock_irqrestore(&dwc->lock, flags);
             }
    +
    +        /* Re-enable interrupts */
    +        channel_set_bit(dw, MASK.BLOCK, dwc->mask);
     }
     
     /* ------------------------------------------------------------------------- */
    @@ -618,11 +620,8 @@ static void dw_dma_tasklet(unsigned long
                             dwc_scan_descriptors(dw, dwc);
             }
     
    -        /*
    -         * Re-enable interrupts.
    -         */
    +        /* Re-enable interrupts */
             channel_set_bit(dw, MASK.XFER, dw->all_chan_mask);
    -        channel_set_bit(dw, MASK.BLOCK, dw->all_chan_mask);
             channel_set_bit(dw, MASK.ERROR, dw->all_chan_mask);
     }
     
    @@ -1256,6 +1255,7 @@ static void dwc_free_chan_resources(stru
     int dw_dma_cyclic_start(struct dma_chan *chan)
     {
             struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
    +        struct dw_dma *dw = to_dw_dma(chan->device);
             unsigned long flags;
     
             if (!test_bit(DW_DMA_IS_CYCLIC, &dwc->flags)) {
    @@ -1264,7 +1264,12 @@ int dw_dma_cyclic_start(struct dma_chan
             }
     
             spin_lock_irqsave(&dwc->lock, flags);
    +
    +        /* Enable interrupts to perform cyclic transfer */
    +        channel_set_bit(dw, MASK.BLOCK, dwc->mask);
    +
             dwc_dostart(dwc, dwc->cdesc->desc[0]);
    +
             spin_unlock_irqrestore(&dwc->lock, flags);
     
             return 0;