Date:	Thu, 1 Mar 2018 22:12:56 +0100 (CET)
From:	Thomas Gleixner <>
Subject:	Re: [PATCH] x86/kprobe: unbalanced preempt counter with CONFIG_PREEMPT=y
On Mon, 15 Jan 2018, yangbo@deepin.com wrote:
> From: Yang Bo <yangbo@deepin.com>
>
> For each kprobe hook, preempt_disable() is called twice, but
> preempt_enable() is called once, when CONFIG_PREEMPT=y. No
> matter kprobe uses ftrace or int3. Then the preempt counter
> overflows, the process might be preempted and migrate to another
> cpu. It causes the kprobe int3 being treated as not kprobe's int3,
> kernel dies.
>
> The backtrace looks like this:
>
> int3: 0000 [#1] PREEMPT SMP
> Modules linked in: ...
> CPU: 1 PID: 2618 COMM: ...
> Hardware: ...
> task: ...
> RIP: 0010[<ffffffff81457cd5>] [<ffffffff81457cd5>] jprobe_return_end+0x0/0xb
> ...
> RIP [<ffffffff81457cd5>] jprobe_return_end+0x0/0xb
> RSP ...
>
> Signed-off-by: Yang Bo <yangbo@deepin.com>
> ---
>  arch/x86/kernel/kprobes/core.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> index bd36f3c33cd0..1ea5c9caa8f1 100644
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -611,6 +611,7 @@ static void setup_singlestep(struct kprobe *p, struct pt_regs *regs,
>  		regs->ip = (unsigned long)p->addr;
>  	else
>  		regs->ip = (unsigned long)p->ainsn.insn;
> +	preempt_enable_no_resched();
Good catch, but to be honest the proper fix is to move the preempt_enable() invocation to the callsite. This kind of asymmetric handling is error-prone and leads to exactly this type of bug. Just adding more preempt_enable() invocations proliferates the problem.
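For illustration only, here is a minimal stand-alone C model of the symmetric pattern suggested above, where the caller that takes the preempt count is also the one that drops it. The mock_* names and the toy counter are invented for this sketch and are not the kernel's code:

/*
 * Stand-alone sketch: the caller owns both ends of the critical
 * section, so the count is balanced no matter which path the callee
 * takes.  All mock_* names are invented for this illustration.
 */
#include <assert.h>
#include <stdio.h>

static int mock_preempt_count;

static void mock_preempt_disable(void) { mock_preempt_count++; }
static void mock_preempt_enable(void)  { mock_preempt_count--; }

/* callee: picks a path, never touches the preempt count */
static int mock_handle_probe(int use_detour)
{
	return use_detour ? 1 : 0;
}

/* caller: disable and enable are visibly paired in one place */
static int mock_int3_handler(int use_detour)
{
	int handled;

	mock_preempt_disable();
	handled = mock_handle_probe(use_detour);
	mock_preempt_enable();

	return handled;
}

int main(void)
{
	mock_int3_handler(0);
	mock_int3_handler(1);
	assert(mock_preempt_count == 0);	/* balanced on every path */
	printf("preempt count after both paths: %d\n", mock_preempt_count);
	return 0;
}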
Looking deeper into kprobe_int3_handler(), there are more places which do this completely incomprehensible conditional preempt count handling via skip_singlestep() and setup_detour_execution(). What an unholy mess.
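For contrast, a minimal stand-alone model of the conditional pattern criticized here, in which a skip_singlestep()-style helper drops the count only on one of its branches. The mock_* names are invented for the sketch and this is not the actual kernel logic:

/*
 * Stand-alone sketch of the asymmetric pattern: whether the count is
 * dropped again depends on which branch the callee takes, so every
 * caller path has to remember its own cleanup.  All names are invented
 * for this illustration.
 */
#include <stdio.h>

static int mock_preempt_count;

static void mock_preempt_disable(void) { mock_preempt_count++; }
static void mock_preempt_enable(void)  { mock_preempt_count--; }

/* callee: re-enables only when it decides to skip the single-step */
static int mock_skip_singlestep(int can_skip)
{
	if (can_skip) {
		mock_preempt_enable();	/* balanced only on this branch */
		return 1;
	}
	return 0;			/* caller must clean up later */
}

static void mock_int3_handler(int can_skip)
{
	mock_preempt_disable();
	if (mock_skip_singlestep(can_skip))
		return;
	/* single-step path: if this path forgets the enable, the count
	 * leaks, which is the kind of imbalance reported in the patch */
}

int main(void)
{
	mock_int3_handler(1);	/* skip path: count goes back to 0    */
	mock_int3_handler(0);	/* single-step path: count stays at 1 */
	printf("leaked preempt count: %d\n", mock_preempt_count);
	return 0;
}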
Masami?
Thanks,
tglx