Subject: [ 175/175] serial: sh-sci: fix a race of DMA submit_tx on transfer
3.3-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Yoshii Takashi <takashi.yoshii.zj@renesas.com>

commit 49d4bcaddca977fffdea8b0b71f6e5da96dac78e upstream.

When DMA is enabled, an sh-sci transfer begins with
 uart_start()
  sci_start_tx()
   if (cookie_tx < 0) schedule_work()
The DMA transfer itself then starts once the scheduled work runs, -- (A)
 process_one_work()
  work_fn_tx()
   cookie_tx = desc->tx_submit()
and finishes when the DMA transfer ends, -- (B)
 sci_dma_tx_complete()
  async_tx_ack()
   cookie_tx = -EINVAL
   (possibly another schedule_work())

This A-to-B sequence is not reentrant, because the controlling variables
(for example, cookie_tx above) are single values, not queues or lists.
The steps must therefore run strictly as A B A B ...; anything else
results in a kernel crash.

To enforce that ordering, sci_start_tx() tests whether cookie_tx < 0
(meaning "not in use") before calling schedule_work().
However, cookie_tx is not assigned a cookie (meaning "in use") until
partway through work_fn_tx(), the function the work queue later runs.

The gap between this test and the later assignment lets the sequence be
broken when uart_start() is called very frequently.
The gap between async_tx_ack() and the following schedule_work() leads
to the same problem.

This patch introduces a new state, cookie_tx == 0, which simply marks the
channel as "busy", and assigns it inside the spin-locked region so that
both gaps are closed.
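
The three states the patch relies on can be modelled in a self-contained
userspace sketch (an illustration only: the pthread mutex stands in for the
uart port spinlock, and start_tx()/work_fn_tx()/dma_tx_complete() below are
simplified stand-ins, not the driver's code):

/*
 * Userspace model of the cookie_tx hand-off (illustration only).
 * The pthread mutex stands in for the uart port spinlock.
 *
 * States of cookie_tx:
 *   < 0   idle      -- no transfer in flight, no work scheduled
 *   == 0  busy      -- work scheduled, descriptor not yet submitted
 *   > 0   submitted -- DMA descriptor submitted, cookie recorded
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int cookie_tx = -1;             /* start idle (-EINVAL stand-in) */

static void schedule_work_stub(void)   /* placeholder for schedule_work() */
{
	printf("tx work scheduled\n");
}

/* Models sci_start_tx(): the test and the "busy" mark sit under one lock. */
static void start_tx(void)
{
	pthread_mutex_lock(&lock);
	if (cookie_tx < 0) {
		cookie_tx = 0;         /* mark busy before the work runs */
		schedule_work_stub();
	}
	pthread_mutex_unlock(&lock);
}

/* Models work_fn_tx(): submit the descriptor and record the cookie. */
static void work_fn_tx(void)
{
	int cookie = 42;               /* pretend desc->tx_submit() result */

	pthread_mutex_lock(&lock);
	cookie_tx = cookie;
	pthread_mutex_unlock(&lock);
}

/* Models sci_dma_tx_complete(): reschedule while busy, or go idle. */
static void dma_tx_complete(int more_pending)
{
	pthread_mutex_lock(&lock);
	if (more_pending) {
		cookie_tx = 0;         /* stay busy across the reschedule */
		schedule_work_stub();
	} else {
		cookie_tx = -1;        /* idle: the next start_tx() may schedule */
	}
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	start_tx();            /* schedules work exactly once */
	start_tx();            /* no-op: cookie_tx == 0, already busy */
	work_fn_tx();          /* cookie_tx becomes > 0 */
	dma_tx_complete(0);    /* transfer finished, back to idle */
	return 0;
}

The point the sketch shows is that the "busy" mark (cookie_tx = 0) is
written inside the same locked region that performed the test, so a second
caller can no longer slip in between the test and the later assignment made
in work_fn_tx().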

Signed-off-by: Takashi Yoshii <takashi.yoshii.zj@renesas.com>
Reviewed-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
drivers/tty/serial/sh-sci.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)

--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -1229,17 +1229,20 @@ static void sci_dma_tx_complete(void *ar
 	port->icount.tx += sg_dma_len(&s->sg_tx);
 
 	async_tx_ack(s->desc_tx);
-	s->cookie_tx = -EINVAL;
 	s->desc_tx = NULL;
 
 	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
 		uart_write_wakeup(port);
 
 	if (!uart_circ_empty(xmit)) {
+		s->cookie_tx = 0;
 		schedule_work(&s->work_tx);
-	} else if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
-		u16 ctrl = sci_in(port, SCSCR);
-		sci_out(port, SCSCR, ctrl & ~SCSCR_TIE);
+	} else {
+		s->cookie_tx = -EINVAL;
+		if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
+			u16 ctrl = sci_in(port, SCSCR);
+			sci_out(port, SCSCR, ctrl & ~SCSCR_TIE);
+		}
 	}
 
 	spin_unlock_irqrestore(&port->lock, flags);
@@ -1501,8 +1504,10 @@ static void sci_start_tx(struct uart_por
 	}
 
 	if (s->chan_tx && !uart_circ_empty(&s->port.state->xmit) &&
-	    s->cookie_tx < 0)
+	    s->cookie_tx < 0) {
+		s->cookie_tx = 0;
 		schedule_work(&s->work_tx);
+	}
 #endif
 
 	if (!s->chan_tx || port->type == PORT_SCIFA || port->type == PORT_SCIFB) {


