    Subject: [245/251] workqueue: cond_resched() after processing each work item
    3.6.11.9-rc1 stable review patch.
    If anyone has any objections, please let me know.

    ------------------

    From: Tejun Heo <tj@kernel.org>

    [ Upstream commit b22ce2785d97423846206cceec4efee0c4afd980 ]

    If !PREEMPT, a kworker running work items back to back can hog CPU.
    This becomes dangerous when a self-requeueing work item that is
    waiting for something to happen races against stop_machine. Such a
    work item would requeue itself indefinitely, hogging the kworker and
    the CPU it's running on, while stop_machine waits for that CPU to
    enter stop_machine and keeps anything else from happening on all
    other CPUs. The two would deadlock.
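
    For illustration only (not part of the original commit message), here
    is a minimal sketch of the kind of self-requeueing work item described
    above.  poll_fn(), poll_work and condition_we_are_waiting_for() are
    hypothetical names; DECLARE_WORK(), schedule_work() and struct
    work_struct are the standard workqueue API.

    	#include <linux/workqueue.h>

    	/* Hypothetical helper, defined elsewhere. */
    	bool condition_we_are_waiting_for(void);

    	/*
    	 * Polls for the condition by immediately requeueing itself.  On a
    	 * !PREEMPT kernel the kworker executing this work item never
    	 * leaves the CPU between iterations, which is the hogging
    	 * described above.
    	 */
    	static void poll_fn(struct work_struct *work)
    	{
    		if (!condition_we_are_waiting_for())
    			schedule_work(work);	/* requeue ourselves */
    	}

    	/* Kicked off once via schedule_work(&poll_work) from elsewhere. */
    	static DECLARE_WORK(poll_work, poll_fn);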

    Jamie Liu reports that this deadlock scenario exists around
    scsi_requeue_run_queue() and libata port multiplier support, where one
    port may exclude command processing from other ports. With the right
    timing, scsi_requeue_run_queue() can end up requeueing itself, trying
    to execute an IO that has been asked to be retried while another
    device holds exclusive access, and that other device in turn can't
    make forward progress due to stop_machine.

    Fix it by invoking cond_resched() after executing each work item.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Reported-by: Jamie Liu <jamieliu@google.com>
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    References: http://thread.gmane.org/gmane.linux.kernel/1552567
    Cc: stable@vger.kernel.org
    ---
    kernel/workqueue.c | 9 +++++++++
    1 file changed, 9 insertions(+)

    diff --git a/kernel/workqueue.c b/kernel/workqueue.c
    index 0352a81..ce44f31 100644
    --- a/kernel/workqueue.c
    +++ b/kernel/workqueue.c
    @@ -2105,6 +2105,15 @@ __acquires(&gcwq->lock)
     		dump_stack();
     	}
     
    +	/*
    +	 * The following prevents a kworker from hogging CPU on !PREEMPT
    +	 * kernels, where a requeueing work item waiting for something to
    +	 * happen could deadlock with stop_machine as such work item could
    +	 * indefinitely requeue itself while all other CPUs are trapped in
    +	 * stop_machine.
    +	 */
    +	cond_resched();
    +
     	spin_lock_irq(&gcwq->lock);
     
     	/* clear cpu intensive status */
    --
    1.7.10.4


