Subject: Re: [PATCH v5 00/21] kprobes: Unify kretprobe trampoline handlers and make kretprobe lockless
On Wed, 2 Sep 2020 11:36:13 +0200
peterz@infradead.org wrote:

> On Wed, Sep 02, 2020 at 05:17:55PM +0900, Masami Hiramatsu wrote:
>
> > > Ok, but then lockdep will yell at you if you have that enabled and run
> > > the unoptimized things.
> >
> > Oh, does it warn about all spinlock usage in kprobes if it is unoptimized?
> > Hmm, that should be noted in the documentation.
>
> Lockdep will warn about spinlocks used in NMI context that are also used
> outside NMI context.

OK, but raw_spin_lock_irqsave() is not tracked by lockdep, correct?
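
(For reference, a minimal sketch of the pattern I understand lockdep
complains about; the lock and function names below are hypothetical.)

#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(example_lock);	/* hypothetical lock */

/* Task-context user of the lock. */
static void task_context_path(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&example_lock, flags);
	/* ... critical section ... */
	raw_spin_unlock_irqrestore(&example_lock, flags);
}

/*
 * NMI-context user, e.g. an INT3/kprobe handler. Lockdep sees the same
 * lock class taken in both contexts and reports a possible deadlock,
 * even when recursion is actually prevented by other means (as the
 * kprobe_busy flag does here).
 */
static void nmi_context_path(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&example_lock, flags);
	/* ... critical section ... */
	raw_spin_unlock_irqrestore(&example_lock, flags);
}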

> Now, for the kretprobe that kprobe_busy flag prevents the actual
> recursion self-deadlock, but lockdep isn't smart enough to see that.
>
> One way around this might be to use SINGLE_DEPTH_NESTING for locks when
> we use them from INT3 context. That way they'll have a different class
> and lockdep will not see the recursion.

Hmm, so lockdep warns only when it detects a spinlock taken in NMI context
that is also taken outside NMI context, and since int3 is now always NMI,
any such spinlock (except raw_spinlocks?) taken in kprobe handlers should
trigger the warning, right?
I have tested this series up to [16/21] with optprobes disabled, but
I haven't seen any lockdep warnings.

>
> pre_handler_kretprobe() is always called from INT3, right?

No, not always; it can also be called from an optprobe (which runs in the
same context as the original code) or from the ftrace handler.
But if you write 0 to /proc/sys/debug/kprobes-optimization and compile the
kernel without the function tracer, it should always be called from INT3.
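
(A hypothetical debug check for that configuration: under it, the
pre-handler should really run from INT3, which is NMI-like.)

#include <linux/preempt.h>	/* in_nmi() */
#include <linux/kprobes.h>

/*
 * Hypothetical sanity check: with optprobes disabled and the function
 * tracer compiled out, pre_handler_kretprobe() should be entered only
 * via INT3, which is NMI-like, so in_nmi() should be true here.
 */
static int pre_handler_debug_check(struct kprobe *p, struct pt_regs *regs)
{
	WARN_ON_ONCE(!in_nmi());
	return 0;
}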

>
> Something like the below might then work...

OK, but I would like to reproduce the lockdep warning first, for
confirmation.
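
(To exercise pre_handler_kretprobe() from INT3, something like this
minimal module sketch should do; the probed symbol and all names below
are just placeholders.)

#include <linux/module.h>
#include <linux/kprobes.h>

static int test_entry(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	return 0;	/* 0 means arm the return handler */
}

static int test_ret(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	return 0;
}

static struct kretprobe test_rp = {
	.kp.symbol_name	= "do_sys_open",	/* placeholder target */
	.entry_handler	= test_entry,
	.handler	= test_ret,
	.maxactive	= 20,
};

static int __init test_init(void)
{
	return register_kretprobe(&test_rp);
}

static void __exit test_exit(void)
{
	unregister_kretprobe(&test_rp);
}

module_init(test_init);
module_exit(test_exit);
MODULE_LICENSE("GPL");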

Thank you,

>
> ---
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 287b263c9cb9..b78f4fe08e86 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -1255,11 +1255,11 @@ __acquires(hlist_lock)
> NOKPROBE_SYMBOL(kretprobe_hash_lock);
>
> static void kretprobe_table_lock(unsigned long hash,
> - unsigned long *flags)
> + unsigned long *flags, int nesting)
> __acquires(hlist_lock)
> {
> raw_spinlock_t *hlist_lock = kretprobe_table_lock_ptr(hash);
> - raw_spin_lock_irqsave(hlist_lock, *flags);
> + raw_spin_lock_irqsave_nested(hlist_lock, *flags, nesting);
> }
> NOKPROBE_SYMBOL(kretprobe_table_lock);
>
> @@ -1326,7 +1326,7 @@ void kprobe_flush_task(struct task_struct *tk)
> INIT_HLIST_HEAD(&empty_rp);
> hash = hash_ptr(tk, KPROBE_HASH_BITS);
> head = &kretprobe_inst_table[hash];
> - kretprobe_table_lock(hash, &flags);
> + kretprobe_table_lock(hash, &flags, 0);
> hlist_for_each_entry_safe(ri, tmp, head, hlist) {
> if (ri->task == tk)
> recycle_rp_inst(ri, &empty_rp);
> @@ -1361,7 +1361,7 @@ static void cleanup_rp_inst(struct kretprobe *rp)
>
> /* No race here */
> for (hash = 0; hash < KPROBE_TABLE_SIZE; hash++) {
> - kretprobe_table_lock(hash, &flags);
> + kretprobe_table_lock(hash, &flags, 0);
> head = &kretprobe_inst_table[hash];
> hlist_for_each_entry_safe(ri, next, head, hlist) {
> if (ri->rp == rp)
> @@ -1950,7 +1950,7 @@ static int pre_handler_kretprobe(struct kprobe *p, struct pt_regs *regs)
>
> /* TODO: consider to only swap the RA after the last pre_handler fired */
> hash = hash_ptr(current, KPROBE_HASH_BITS);
> - raw_spin_lock_irqsave(&rp->lock, flags);
> + raw_spin_lock_irqsave_nested(&rp->lock, flags, SINGLE_DEPTH_NESTING);
> if (!hlist_empty(&rp->free_instances)) {
> ri = hlist_entry(rp->free_instances.first,
> struct kretprobe_instance, hlist);
> @@ -1961,7 +1961,7 @@ static int pre_handler_kretprobe(struct kprobe *p, struct pt_regs *regs)
> ri->task = current;
>
> if (rp->entry_handler && rp->entry_handler(ri, regs)) {
> - raw_spin_lock_irqsave(&rp->lock, flags);
> + raw_spin_lock_irqsave_nested(&rp->lock, flags, SINGLE_DEPTH_NESTING);
> hlist_add_head(&ri->hlist, &rp->free_instances);
> raw_spin_unlock_irqrestore(&rp->lock, flags);
> return 0;
> @@ -1971,7 +1971,7 @@ static int pre_handler_kretprobe(struct kprobe *p, struct pt_regs *regs)
>
> /* XXX(hch): why is there no hlist_move_head? */
> INIT_HLIST_NODE(&ri->hlist);
> - kretprobe_table_lock(hash, &flags);
> + kretprobe_table_lock(hash, &flags, SINGLE_DEPTH_NESTING);
> hlist_add_head(&ri->hlist, &kretprobe_inst_table[hash]);
> kretprobe_table_unlock(hash, &flags);
> } else {


--
Masami Hiramatsu <mhiramat@kernel.org>
