Date:	Wed, 16 Jul 2008 21:15:38 +0400
From:	Oleg Nesterov <>
Subject:	PATCH? exit_mm: update the comments, move atomic_inc(mm_count) outside of mmap_sem
(textually depends on "[PATCH 2/2] coredump: exit_mm: clear ->mm first, then play with ->core_state")
This is more an "is my understanding correct?" question than a patch. Assuming it is correct:
- explain the subtle atomic_inc(&mm->mm_count).
- move it outside of mm->mmap_sem; otherwise it is not clear why we need this lock to bump ->mm_count.
- "fix" the comment above task_lock(). This lock is really needed for get_task_mm/etc (see the sketch below). Yes, we can change the code to use smp_mb() + spin_unlock_wait(), but I guess we also have another reason.
Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
--- 26-rc2/kernel/exit.c~EXIT_MM	2008-07-15 20:24:50.000000000 +0400
+++ 26-rc2/kernel/exit.c	2008-07-16 20:59:34.000000000 +0400
@@ -680,6 +680,14 @@ static void exit_mm(struct task_struct *
 	mm_release(tsk, mm);
 	if (!mm)
 		return;
+
+	/*
+	 * We are going to clear ->mm. The next schedule() after that
+	 * will assume we did context_switch()->atomic_inc(mm_count).
+	 * finish_task_switch()->mmdrop() will "fix" the counter.
+	 */
+	atomic_inc(&mm->mm_count);
+	BUG_ON(mm != tsk->active_mm);
 	/*
 	 * Serialize with any possible pending coredump.
 	 * We must hold mmap_sem around checking core_state
@@ -689,9 +697,7 @@ static void exit_mm(struct task_struct *
 	 */
 	down_read(&mm->mmap_sem);
 	core_state = mm->core_state;
-	atomic_inc(&mm->mm_count);
-	BUG_ON(mm != tsk->active_mm);
-	/* more a memory barrier than a real lock */
+	/* disables preemption, we must not schedule before enter_lazy_tlb() */
 	task_lock(tsk);
 	tsk->mm = NULL;
 	up_read(&mm->mmap_sem);
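For reference, the scheduler bookkeeping the new comment talks about looks roughly like this (a simplified sketch of the 2.6.26-era context_switch()/finish_task_switch(), not part of the patch; unrelated details are omitted):

	/* context_switch(): a task with ->mm == NULL borrows prev's active_mm */
	if (!next->mm) {
		next->active_mm = oldmm;
		atomic_inc(&oldmm->mm_count);	/* the inc exit_mm() now does by hand */
		enter_lazy_tlb(oldmm, next);
	}
	if (!prev->mm) {			/* prev already cleared ->mm, e.g. in exit_mm() */
		prev->active_mm = NULL;
		rq->prev_mm = oldmm;
	}

	/* finish_task_switch(): drop the reference taken above */
	mm = rq->prev_mm;
	rq->prev_mm = NULL;
	if (mm)
		mmdrop(mm);

So once exit_mm() sets tsk->mm = NULL, the next schedule() treats the task as lazy-TLB and finish_task_switch() will mmdrop() its active_mm even though this task never went through the atomic_inc() above, hence the explicit atomic_inc(&mm->mm_count) before clearing ->mm.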