Subject: Re: INFO: possible circular locking dependency at cleanup_workqueue_thread
From: Johannes Berg <>
Date: Tue, 19 May 2009 17:33:23 +0200
On Tue, 2009-05-19 at 14:00 +0200, Oleg Nesterov wrote:
> > I'm not familiar enough with the code -- but what are we really trying
> > to do in CPU_POST_DEAD? It seems to me that at that time things must
> > already be off the CPU, so ...?
>
> Yes, this cpu is dead, we should do cleanup_workqueue_thread() to kill
> cwq->thread.
>
> > On the other hand that calls
> > flush_cpu_workqueue() so it seems it would actually wait for the work to
> > be executed on some other CPU, within the CPU_POST_DEAD notification?
>
> Yes. Because we can't just kill cwq->thread, we can have the pending
> work_structs so we have to flush.
>
> Why can't we move these works to another CPU? We can, but this doesn't
> really help. Because in any case we should at least wait for
> cwq->current_work to complete.
>
> Why do we use CPU_POST_DEAD, and not (say) CPU_DEAD to flush/kill?
> Because work->func() can sleep in get_online_cpus(), we can't flush
> until we drop cpu_hotplug.lock.
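(For readers following along: the flush/kill Oleg describes lives in the
workqueue hotplug callback in kernel/workqueue.c. A condensed sketch of
the 2.6.30-era code, from memory and not the exact source, with the other
notifier cases omitted:

	#include <linux/workqueue.h>
	#include <linux/cpu.h>
	#include <linux/notifier.h>

	static int workqueue_cpu_callback(struct notifier_block *nfb,
					  unsigned long action, void *hcpu)
	{
		unsigned int cpu = (unsigned long)hcpu;
		struct workqueue_struct *wq;

		switch (action & ~CPU_TASKS_FROZEN) {
		case CPU_POST_DEAD:
			/*
			 * Runs only after cpu_hotplug.lock has been dropped,
			 * so a work->func() sleeping in get_online_cpus()
			 * can still make progress while we flush.
			 * cleanup_workqueue_thread() flushes the cwq and
			 * then reaps cwq->thread.
			 */
			list_for_each_entry(wq, &workqueues, list)
				cleanup_workqueue_thread(per_cpu_ptr(wq->cpu_wq, cpu));
			break;
		}
		return NOTIFY_OK;
	}

)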
Right. But exactly this happens in the hibernate case -- the hibernate code calls kernel/cpu.c:disable_nonboot_cpus(), which calls _cpu_down(), which calls raw_notifier_call_chain(&cpu_chain, CPU_POST_DEAD, ...). Sadly, it does so while holding the cpu_add_remove_lock, which happens to have the dependencies outlined in the original email...
The same happens in cpu_down() (without the leading _), which you can trigger from sysfs by manually removing the CPU, so it's not hibernate-specific.
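To make the lock ordering explicit, here is a condensed sketch of the
2.6.30-era kernel/cpu.c path (again from memory; error handling and the
cpu_hotplug_disabled check are omitted). cpu_maps_update_begin()/done()
are the helpers that take and drop cpu_add_remove_lock, and
disable_nonboot_cpus() wraps its _cpu_down() loop in the same pair:

	int cpu_down(unsigned int cpu)
	{
		int err;

		cpu_maps_update_begin();  /* mutex_lock(&cpu_add_remove_lock) */
		err = _cpu_down(cpu, 0);  /* ends by calling the CPU_POST_DEAD
					   * notifiers, i.e. the workqueue flush
					   * above runs while cpu_add_remove_lock
					   * is still held */
		cpu_maps_update_done();   /* mutex_unlock(&cpu_add_remove_lock) */
		return err;
	}
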
Anyway, you can have a deadlock like this:
 CPU 3                     CPU 2                      CPU 1

                                                      suspend/hibernate
                           something:
                           rtnl_lock()
                                                      device_pm_lock()
                                                       -> mutex_lock(&dpm_list_mtx)

                           mutex_lock(&dpm_list_mtx)

 linkwatch_work
  -> rtnl_lock()
                                                      disable_nonboot_cpus()
                                                       -> flush CPU 3 workqueue
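To make the CPU 3 column concrete: a work item of the following shape is
enough to close the cycle once it is pending on the dying CPU's workqueue.
The demo_* names are invented for illustration; the real offender is the
linkwatch work in net/core/link_watch.c, which also takes rtnl_lock()
from keventd:

	#include <linux/workqueue.h>
	#include <linux/rtnetlink.h>

	/* Hypothetical stand-in for linkwatch_work */
	static void demo_linkwatch_like(struct work_struct *work)
	{
		rtnl_lock();    /* blocks behind CPU 2's rtnl_lock() above */
		/* ... whatever the work does under RTNL ... */
		rtnl_unlock();
	}
	static DECLARE_WORK(demo_work, demo_linkwatch_like);

Once such an item is pending on CPU 3, the flush on CPU 1 waits for it,
it waits for rtnl_lock held by CPU 2, and CPU 2 waits for dpm_list_mtx
held by CPU 1 -- the circular dependency lockdep reported.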
johannes