 
Subject: Re: INFO: possible circular locking dependency at cleanup_workqueue_thread
From: Lei Ming
    2009/5/19 Johannes Berg <johannes@sipsolutions.net>:
    > On Tue, 2009-05-19 at 14:00 +0200, Oleg Nesterov wrote:
    >
    >> > I'm not familiar enough with the code -- but what are we really trying
    >> > to do in CPU_POST_DEAD? It seems to me that at that time things must
    >> > already be off the CPU, so ...?
    >>
    >> Yes, this cpu is dead, we should do cleanup_workqueue_thread() to kill
    >> cwq->thread.
    >>
    >> > On the other hand that calls
    >> > flush_cpu_workqueue() so it seems it would actually wait for the work to
    >> > be executed on some other CPU, within the CPU_POST_DEAD notification?
    >>
    >> Yes. Because we can't just kill cwq->thread, we can have the pending
    >> work_structs so we have to flush.
    >>
    >> Why can't we move these works to another CPU? We can, but this doesn't
    >> really help. Because in any case we should at least wait for
    >> cwq->current_work to complete.
    >>
    >> Why do we use CPU_POST_DEAD, and not (say) CPU_DEAD to flush/kill ?
    >> Because work->func() can sleep in get_online_cpus(), we can't flush
    >> until we drop cpu_hotplug.lock.
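
    [Aside, not from the original mail: a minimal sketch of the kind of
    work item Oleg is talking about. The function name is made up;
    get_online_cpus()/put_online_cpus() are the real APIs that can block
    on cpu_hotplug.lock, which is why the flush has to be deferred to
    CPU_POST_DEAD, i.e. until that lock has been dropped.]

    #include <linux/cpu.h>
    #include <linux/workqueue.h>

    /* Illustration only: a work item that sleeps in get_online_cpus().
     * Flushing it while cpu_hotplug.lock is still held would deadlock. */
    static void hotplug_aware_work_func(struct work_struct *work)
    {
            get_online_cpus();      /* may block until cpu_hotplug.lock is free */
            /* ... touch per-cpu state safely ... */
            put_online_cpus();
    }
    static DECLARE_WORK(hotplug_aware_work, hotplug_aware_work_func);
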
    >
    > Right. But exactly this happens in the hibernate case -- the hibernate
    > code calls kernel/cpu.c:disable_nonboot_cpus() which calls _cpu_down()
    > which calls raw_notifier_call_chain(&cpu_chain, CPU_POST_DEAD... Sadly,
    > it does so while holding the cpu_add_remove_lock, which happens to
    > have the dependencies outlined in the original email...
    >
    > The same happens in cpu_down() (without leading _) which you can trigger
    > from sysfs by manually removing the CPU, so it's not hibernate specific.
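
    [Again an aside, not the actual kernel/cpu.c code: a heavily
    simplified sketch of the ordering Johannes describes. The lock and
    notifier head are redeclared here only to keep the sketch
    self-contained; the point is just that the CPU_POST_DEAD notifiers,
    and with them the workqueue flush, still run under
    cpu_add_remove_lock.]

    #include <linux/cpu.h>
    #include <linux/mutex.h>
    #include <linux/notifier.h>

    static DEFINE_MUTEX(cpu_add_remove_lock);   /* stand-in for kernel/cpu.c's lock */
    static RAW_NOTIFIER_HEAD(cpu_chain);        /* stand-in for the hotplug chain */

    static void cpu_down_sketch(unsigned int cpu)
    {
            mutex_lock(&cpu_add_remove_lock);   /* cpu_maps_update_begin() */
            /* ... take the CPU offline, CPU_DEAD notifiers, etc. ... */
            raw_notifier_call_chain(&cpu_chain, CPU_POST_DEAD,
                                    (void *)(long)cpu);
            /* -> workqueue callback -> cleanup_workqueue_thread() -> flush */
            mutex_unlock(&cpu_add_remove_lock); /* cpu_maps_update_done() */
    }
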
    >
    > Anyway, you can have a deadlock like this:
    >
    > CPU 3                   CPU 2                           CPU 1
    >                                                        suspend/hibernate
    >                        something:
    >                        rtnl_lock()                     device_pm_lock()
    >                                                        -> mutex_lock(&dpm_list_mtx)
    >
    >                        mutex_lock(&dpm_list_mtx)

    Would you give an explanation of why mutex_lock(&dpm_list_mtx) runs on CPU 2
    and depends on rtnl_lock?
    Thanks!

    >
    > linkwatch_work
    >  -> rtnl_lock()
    >                                                        disable_nonboot_cpus()
    >                                                        -> flush CPU 3 workqueue
    >
    > johannes
    >
    >
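
    [For illustration only, a userspace analogue of the cycle in the
    diagram, using pthreads. Nothing here is kernel code; the names are
    just borrowed from the diagram, and the "flush CPU 3 workqueue" step
    is modelled as pthread_join() on the thread playing linkwatch_work.
    The sleeps force the interleaving shown above, after which all three
    threads are stuck.]

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t dpm_list_mtx = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t rtnl_mutex   = PTHREAD_MUTEX_INITIALIZER;

    static void *linkwatch_work(void *arg)          /* "CPU 3" */
    {
            sleep(1);                               /* let "CPU 2" take rtnl first */
            pthread_mutex_lock(&rtnl_mutex);        /* blocks: CPU 2 holds it */
            pthread_mutex_unlock(&rtnl_mutex);
            return NULL;
    }

    static void *something(void *arg)               /* "CPU 2" */
    {
            pthread_mutex_lock(&rtnl_mutex);
            sleep(1);                               /* let "CPU 1" take dpm_list_mtx */
            pthread_mutex_lock(&dpm_list_mtx);      /* blocks: CPU 1 holds it */
            pthread_mutex_unlock(&dpm_list_mtx);
            pthread_mutex_unlock(&rtnl_mutex);
            return NULL;
    }

    int main(void)                                  /* "CPU 1": suspend/hibernate */
    {
            pthread_t worker, other;

            pthread_create(&worker, NULL, linkwatch_work, NULL);
            pthread_create(&other,  NULL, something,      NULL);

            pthread_mutex_lock(&dpm_list_mtx);      /* device_pm_lock() */
            sleep(2);
            pthread_join(worker, NULL);             /* "flush CPU 3 workqueue": never returns */
            pthread_mutex_unlock(&dpm_list_mtx);

            fprintf(stderr, "not reached\n");
            return 0;
    }
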



    --
    Lei Ming
