Subject: Re: [PATCH 0/2] kprobes: Fix kretprobe incorrect stacking order problem
On Mon, 7 Jan 2019 22:19:04 +0100
Andrea Righi <righi.andrea@gmail.com> wrote:

> > > If we put a kretprobe on raw_spin_lock_irqsave(), it looks like the
> > > kretprobe is going to call kretprobe...
> >
> > Right, but we should be able to add some recursion protection to stop
> > that. I have similar protection in the ftrace code.
>
> If we assume that __raw_spin_lock/unlock*() are always inlined, a

I wouldn't assume that.

> possible way to prevent this recursion could be to use directly those
> functions to do locking from the kretprobe trampoline.
>
> But I'm not sure if that's a safe assumption... if not I'll see if I can
> find a better solution.

All you need to do is have a per_cpu variable, where you just do:

	preempt_disable_notrace();
	if (this_cpu_read(kprobe_recursion))
		goto out;
	this_cpu_inc(kprobe_recursion);
	[...]
	this_cpu_dec(kprobe_recursion);
out:
	preempt_enable_notrace();

And then just ignore any kprobes that trigger while you are processing
the current kprobe.

Something like that. If you want (or if it already happens in that
path), replace preempt_disable_notrace() with local_irq_save().
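
Spelled out as self-contained helpers, a sketch of that guard could
look like this (illustration only: the kprobe_recursion counter and
the helper names here are made up, not existing kernel API):

#include <linux/percpu.h>
#include <linux/preempt.h>

static DEFINE_PER_CPU(int, kprobe_recursion);

/*
 * Returns 1 if we took the guard and may process this kprobe,
 * 0 if this CPU is already inside a handler and the caller
 * should silently skip this one.
 */
static int kprobe_guard_enter(void)
{
	preempt_disable_notrace();
	if (this_cpu_read(kprobe_recursion)) {
		preempt_enable_notrace();
		return 0;
	}
	this_cpu_inc(kprobe_recursion);
	return 1;
}

static void kprobe_guard_exit(void)
{
	this_cpu_dec(kprobe_recursion);
	preempt_enable_notrace();
}

The trampoline would call kprobe_guard_enter() on entry, do its
hash-lock and instance-recycling work only when that returns 1, and
finish with kprobe_guard_exit(). Any kretprobe that triggers in
between on the same CPU is simply ignored, which is the same idea as
the recursion protection in the ftrace code.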

-- Steve

>
> Thanks,
>
> From: Andrea Righi <righi.andrea@gmail.com>
> Subject: [PATCH] kprobes: prevent recursion deadlock with kretprobe and
> spinlocks
>
> kretprobe_trampoline() uses a spinlock to protect the hash of
> kretprobes. Adding a kretprobe to one of the spinlock functions may
> cause a recursion deadlock where the kretprobe calls itself:
>
> kretprobe_trampoline()
>  -> trampoline_handler()
>   -> kretprobe_hash_lock()
>    -> raw_spin_lock_irqsave()
>     -> _raw_spin_lock_irqsave()
>      kretprobe_trampoline from _raw_spin_lock_irqsave => DEADLOCK
>
> kretprobe_trampoline()
>  -> trampoline_handler()
>   -> recycle_rp_inst()
>    -> raw_spin_lock()
>     -> _raw_spin_lock()
>      kretprobe_trampoline from _raw_spin_lock => DEADLOCK
>
> Use the corresponding inlined spinlock functions to prevent this
> recursion.
>
> Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
> ---
> kernel/kprobes.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index f4ddfdd2d07e..b89bef5e3d80 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -1154,9 +1154,9 @@ void recycle_rp_inst(struct kretprobe_instance *ri,
>  	hlist_del(&ri->hlist);
>  	INIT_HLIST_NODE(&ri->hlist);
>  	if (likely(rp)) {
> -		raw_spin_lock(&rp->lock);
> +		__raw_spin_lock(&rp->lock);
>  		hlist_add_head(&ri->hlist, &rp->free_instances);
> -		raw_spin_unlock(&rp->lock);
> +		__raw_spin_unlock(&rp->lock);
>  	} else
>  		/* Unregistering */
>  		hlist_add_head(&ri->hlist, head);
> @@ -1172,7 +1172,7 @@ __acquires(hlist_lock)
>
>  	*head = &kretprobe_inst_table[hash];
>  	hlist_lock = kretprobe_table_lock_ptr(hash);
> -	raw_spin_lock_irqsave(hlist_lock, *flags);
> +	*flags = __raw_spin_lock_irqsave(hlist_lock);
>  }
>  NOKPROBE_SYMBOL(kretprobe_hash_lock);
>
> @@ -1193,7 +1193,7 @@ __releases(hlist_lock)
>  	raw_spinlock_t *hlist_lock;
>
>  	hlist_lock = kretprobe_table_lock_ptr(hash);
> -	raw_spin_unlock_irqrestore(hlist_lock, *flags);
> +	__raw_spin_unlock_irqrestore(hlist_lock, *flags);
>  }
>  NOKPROBE_SYMBOL(kretprobe_hash_unlock);
>
