    Subject: Re: [PATCH 1/2] workqueue: debug flushing deadlocks with lockdep

    On 07/05, Johannes Berg wrote:
    >
    > @@ -257,7 +261,9 @@ static void run_workqueue(struct cpu_wor
    >
    >  		BUG_ON(get_wq_data(work) != cwq);
    >  		work_clear_pending(work);
    > +		lock_acquire(&cwq->wq->lockdep_map, 0, 0, 0, 2, _THIS_IP_);
    >  		f(work);
    > +		lock_release(&cwq->wq->lockdep_map, 1, _THIS_IP_);

    Johannes, my apologies. You were worried about recursion, and you were right,
    sorry!

    Currently work->func() is allowed to do flush_workqueue() on its own
    workqueue, so we have:

    run_workqueue()
        work->func()
            flush_workqueue()
                run_workqueue()

    All but work->func() take wq->lockdep_map, so I guess check_deadlock()
    won't be happy.
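
    To make it concrete, a work function like this (made-up names, just for
    illustration) is perfectly legal today and would trigger that check:

        static struct workqueue_struct *my_wq;	/* hypothetical */

        static void my_work_func(struct work_struct *work)
        {
                /*
                 * We run from run_workqueue(), which (with the patch above)
                 * did lock_acquire() on this workqueue's lockdep_map around us.
                 */
                flush_workqueue(my_wq);	/* takes the same lockdep_map again */
        }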

    In your initial patch, wq->lockdep_map was taken in flush_cpu_workqueue()
    when cwq->thread != current, but this is still not enough, because we take
    the same lock when flush_workqueue() does flush_cpu_workqueue() on another
    CPU.
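
    IOW, with that patch the flush side looks roughly like this (simplified
    from memory, the barrier details omitted):

        /* called by flush_workqueue() for each CPU's cwq */
        static void flush_cpu_workqueue(struct cpu_workqueue_struct *cwq)
        {
                if (cwq->thread == current) {
                        /* keventd trying to flush its own queue, run it by hand */
                        run_workqueue(cwq);
                } else {
                        lock_acquire(&cwq->wq->lockdep_map, 0, 0, 0, 2, _THIS_IP_);
                        /* insert a barrier work and wait for its completion */
                        lock_release(&cwq->wq->lockdep_map, 1, _THIS_IP_);
                }
        }

    So even if we skip the annotation for the current CPU's cwq, the flush of
    the next CPU's cwq takes the same map while lockdep thinks run_workqueue()
    still holds it.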

    run_workqueue() is easy: it can check cwq->run_depth == 1 before the
    lock/unlock.
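
    Something like this, I mean (untested):

        if (cwq->run_depth == 1)
                lock_acquire(&cwq->wq->lockdep_map, 0, 0, 0, 2, _THIS_IP_);
        f(work);
        if (cwq->run_depth == 1)
                lock_release(&cwq->wq->lockdep_map, 1, _THIS_IP_);

    That way only the outermost run_workqueue() is annotated, so the nested one
    (entered via flush_cpu_workqueue() when cwq->thread == current) doesn't
    recurse on the map.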

    Does anybody see a simple solution? Perhaps some clever trick with LOCKDEP?


    OTOH, perhaps we can forbid such a behaviour? Andrew, do you know any good
    example of "keventd trying to flush its own queue"?

    In any case, I think both patches are great, thanks for doing this!

    Oleg.

