From: Eric W. Biederman
Subject: Re: /proc dcache deadlock in do_exit
Date: Tue, 27 Nov 2007 18:43:02 -0700
Andrew Morton <akpm@linux-foundation.org> writes:
> On Tue, 27 Nov 2007 14:20:22 +0100
> Andrea Arcangeli <andrea@suse.de> wrote:
>
>> Hi,
>>
>> this patch fixes a sles9 system hang in start_this_handle from a
>> customer with some heavy workload where all tasks are waiting on
>> kjournald to commit the transaction, but kjournald waits on t_updates
>> to go down to zero (it never does). This was reported as a lowmem
>> shortage deadlock but when checking the debug data I noticed the VM
>> wasn't under pressure at all (well it was really under vm pressure,
>> because lots of tasks hanged in the VM prune_dcache methods trying to
>> flush dirty inodes, but no task was hanging in GFP_NOFS mode, the
>> holder of the journal handle should have if this was a vm issue in the
>> first place). No task was apparently holding the leftover handle in
>> the committing transaction, so I deduced t_updates was stuck to 1
>> because a journal_stop was never run by some path (this turned out to
>> be correct). With a debug patch adding proper reverse links and stack
>> trace logging in ext3 deployed in production, I found journal_stop is
>> never run because mark_inode_dirty_sync is called inside release_task
>> called by do_exit. (that was quite fun because I would have never
>> thought about this subtleness, I thought a regular path in ext3 had a
>> bug and it forgot to call journal_stop)
>>
>> do_exit->release_task->mark_inode_dirty_sync->schedule() (will never
>> come back to run journal_stop)
>
> I don't see why the schedule() will not return?  Because the task has
> PF_EXITING set?  Doesn't TASK_DEAD do that?
Yes, why do we not come back from schedule?
If we are not allowed to schedule after setting PF_EXITING but before we
set TASK_DEAD, that entire code path sounds brittle and error-prone.
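
Roughly, the ordering in question looks like this (a simplified sketch
from memory, not the literal do_exit() source; the comments mark where
the reported hang sits):

	void do_exit(long code)
	{
		struct task_struct *tsk = current;

		tsk->flags |= PF_EXITING;  /* set early in the exit path */
		...
		exit_notify(tsk);
		/*
		 * For a self-reaping task release_task() runs in here:
		 *   release_task -> proc_flush_task -> shrink_dcache_parent
		 *     -> prune_dcache -> mark_inode_dirty_sync -> schedule()
		 * Anything that sleeps in this window has to be able to
		 * wake up and run again, or cleanup like journal_stop()
		 * never happens.
		 */
		...
		preempt_disable();
		tsk->state = TASK_DEAD;  /* only past this point is it legal */
		schedule();              /* for schedule() to never return   */
		BUG();
	}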
> What are the implications of not running shrink_dcache_parent() on the exit
> path sometimes?  We'll leave procfs stuff behind?  Will they be reaped by
> memory pressure later on?
They should be. I think the reaping is just an optimization: we know we
will never need those dentries again, and they can in any case be pinned
by open directories or open files. What I don't know off the top of my
head is whether there is a d_drop equivalent going on that might be a
problem if we don't address it.
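
For reference, the flush I have in mind is shaped roughly like this
(paraphrased from memory of proc_flush_task(), so treat the details as
approximate):

	/*
	 * Flush the /proc/<pid> dentries of a task that is going away.
	 * shrink_dcache_parent() is the part that can stall under dcache
	 * pressure; d_drop() is what unhashes the dentry so a later
	 * lookup cannot find the stale entry.
	 */
	dentry = d_hash_and_lookup(mnt->mnt_root, &name);
	if (dentry) {
		shrink_dcache_parent(dentry);
		d_drop(dentry);
		dput(dentry);
	}

If skipping the shrink also means skipping the d_drop, we would be
leaving a stale hashed dentry behind rather than just unreclaimed
memory, and that would need fixing.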
Eric