Subject: [PATCH 1/2] coredump: ensure all coredumping tasks have SIGNAL_GROUP_COREDUMP

task_will_free_mem() is wrong in many ways, and in particular the
SIGNAL_GROUP_COREDUMP check is not reliable: a task can participate
in the coredumping without the SIGNAL_GROUP_COREDUMP bit set.
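
For context, the helper in question looks roughly like this around this
kernel version (a paraphrased sketch, not an exact copy of the tree):

	/* sketch of the pre-patch helper, approximately include/linux/oom.h */
	static inline bool task_will_free_mem(struct task_struct *task)
	{
		return (task->flags & PF_EXITING) &&
			!(task->signal->flags & SIGNAL_GROUP_COREDUMP);
	}

Before this patch zap_threads() marked the other CLONE_VM processes with
SIGNAL_GROUP_EXIT rather than SIGNAL_GROUP_COREDUMP, so a PF_EXITING task
still blocked in the dump could pass this check even though its mm will
not be freed until the core dump finishes.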

Change the zap_threads() paths to always set SIGNAL_GROUP_COREDUMP even
if other CLONE_VM processes can't react to SIGKILL. Fortunately, at
least the oom-kill case is fine; it kills all tasks sharing the same mm,
so it should also kill the process which actually dumps the core.
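
To illustrate why the oom-kill path is safe here: oom_kill_process()
already sends SIGKILL to every other user process sharing the victim's
mm, roughly along these lines (an illustrative sketch, not the exact
mm/oom_kill.c code):

	/* sketch: kill every user process that shares the victim's mm */
	for_each_process(p) {
		if (p->mm != victim_mm || same_thread_group(p, victim) ||
		    (p->flags & PF_KTHREAD))
			continue;
		do_send_sig_info(SIGKILL, SEND_SIG_FORCED, p, true);
	}

Since the dumping process shares that mm it gets SIGKILL too, and SIGKILL
is exactly the signal zap_process() still lets through.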

The change in prepare_signal() is not strictly necessary; it just
ensures that the patch does not bring another subtle behavioural
change. But it reminds us that this SIGNAL_GROUP_EXIT/COREDUMP case
needs more changes.
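
For the record, the flag combinations prepare_signal() can now see break
down as follows (a summary of the intended behaviour with this patch
applied, not code from the tree):

	/*
	 * SIGNAL_GROUP_COREDUMP only:
	 *	the dumping thread group; ignore every signal except SIGKILL.
	 * SIGNAL_GROUP_COREDUMP | SIGNAL_GROUP_EXIT:
	 *	a CLONE_VM process zapped by zap_threads(); treated exactly
	 *	like a plain group exit, as it was before this patch.
	 * SIGNAL_GROUP_EXIT only:
	 *	normal group exit; the process is already dying, nothing
	 *	special to do.
	 */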

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
fs/coredump.c | 12 ++++++------
kernel/signal.c | 2 +-
2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/coredump.c b/fs/coredump.c
index 53d7d46..4fed8d0 100644
--- a/fs/coredump.c
+++ b/fs/coredump.c
@@ -282,11 +282,13 @@ out:
 	return ispipe;
 }
 
-static int zap_process(struct task_struct *start, int exit_code)
+static int zap_process(struct task_struct *start, int exit_code, int flags)
 {
 	struct task_struct *t;
 	int nr = 0;
 
+	/* ignore all signals except SIGKILL, see prepare_signal() */
+	start->signal->flags = SIGNAL_GROUP_COREDUMP | flags;
 	start->signal->group_exit_code = exit_code;
 	start->signal->group_stop_count = 0;
 
@@ -313,10 +315,8 @@ static int zap_threads(struct task_struct *tsk, struct mm_struct *mm,
 	spin_lock_irq(&tsk->sighand->siglock);
 	if (!signal_group_exit(tsk->signal)) {
 		mm->core_state = core_state;
-		nr = zap_process(tsk, exit_code);
 		tsk->signal->group_exit_task = tsk;
-		/* ignore all signals except SIGKILL, see prepare_signal() */
-		tsk->signal->flags = SIGNAL_GROUP_COREDUMP;
+		nr = zap_process(tsk, exit_code, 0);
 		clear_tsk_thread_flag(tsk, TIF_SIGPENDING);
 	}
 	spin_unlock_irq(&tsk->sighand->siglock);
@@ -367,8 +367,8 @@ static int zap_threads(struct task_struct *tsk, struct mm_struct *mm,
 		if (p->mm) {
 			if (unlikely(p->mm == mm)) {
 				lock_task_sighand(p, &flags);
-				nr += zap_process(p, exit_code);
-				p->signal->flags = SIGNAL_GROUP_EXIT;
+				nr += zap_process(p, exit_code,
+							SIGNAL_GROUP_EXIT);
 				unlock_task_sighand(p, &flags);
 			}
 			break;
diff --git a/kernel/signal.c b/kernel/signal.c
index f2cbd4e..c0b01fe 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -788,7 +788,7 @@ static bool prepare_signal(int sig, struct task_struct *p, bool force)
 	sigset_t flush;
 
 	if (signal->flags & (SIGNAL_GROUP_EXIT | SIGNAL_GROUP_COREDUMP)) {
-		if (signal->flags & SIGNAL_GROUP_COREDUMP)
+		if (!(signal->flags & SIGNAL_GROUP_EXIT))
 			return sig == SIGKILL;
 		/*
 		 * The process is in the middle of dying, nothing to do.
--
2.4.3

