Subject: Re: [PATCH] perf: Synchronously cleanup child events
Peter Zijlstra <peterz@infradead.org> writes:

> On Fri, Jan 15, 2016 at 04:07:41PM +0200, Alexander Shishkin wrote:
>>  int perf_event_release_kernel(struct perf_event *event)
>>  {
>> +	struct perf_event *child, *tmp;
>> +	LIST_HEAD(child_list);
>>
>> +	if (!is_kernel_event(event))
>> +		perf_remove_from_owner(event);
>>
>> +	event->owner = NULL;
>>
>> +	/*
>> +	 * event::child_mutex nests inside ctx::lock, so move children
>> +	 * to a safe place first and avoid inversion
>> +	 */
>> +	mutex_lock(&event->child_mutex);
>> +	list_splice_init(&event->child_list, &child_list);
>> +	mutex_unlock(&event->child_mutex);
>
> I suspect this races against inherit_event(), like:
>
> inherit_event()                          perf_event_release_kernel()
>
>   if (is_orphaned_event(parent_event))   /* false */
>
>                                            event->owner = NULL
>
>                                            mutex_lock(child_mutex);
>                                            list_splice
>                                            mutex_unlock(child_mutex);
>
>   mutex_lock(child_mutex);
>   list_add_tail
>   mutex_unlock(child_mutex);

Indeed, this is possible.
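
Just to sketch one way of closing that window (untested, and not necessarily
what the final patch should do): have the release path mark the parent as
orphaned while it holds child_mutex for the splice, and have inherit_event()
do its orphan check under that same mutex, so the check and the
list_add_tail() can't straddle the splice:

	/* perf_event_release_kernel() side (sketch) */
	mutex_lock(&event->child_mutex);
	/*
	 * state is normally serialized by ctx::lock, so a dedicated
	 * "dead" marker may be needed here instead of EXIT
	 */
	event->state = PERF_EVENT_STATE_EXIT;
	list_splice_init(&event->child_list, &child_list);
	mutex_unlock(&event->child_mutex);

	/* inherit_event() side (sketch) */
	mutex_lock(&parent_event->child_mutex);
	if (is_orphaned_event(parent_event)) {
		/* parent is being torn down; don't add to its child_list */
		mutex_unlock(&parent_event->child_mutex);
		free_event(child_event);
		return NULL;
	}
	list_add_tail(&child_event->child_list, &parent_event->child_list);
	mutex_unlock(&parent_event->child_mutex);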

>
> Something like this would fix that I think, not sure its the best way
> though...
>
>
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -979,8 +979,8 @@ static void put_ctx(struct perf_event_co
> * Lock order:
> * task_struct::perf_event_mutex
> * perf_event_context::mutex
> - * perf_event_context::lock
> * perf_event::child_mutex;
> + * perf_event_context::lock
> * perf_event::mmap_mutex
> * mmap_sem
> */

This is, actually, the order that we have already:

perf_ioctl(): ctx::mutex
-> perf_event_for_each(): event::child_mutex
-> _perf_event_enable(): ctx::lock

that is, ctx::lock already nests inside event::child_mutex. So what
you're suggesting is an ok solution.
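
Condensed, for reference (trimmed, not verbatim core.c, but showing where
each lock is taken on that path):

static long perf_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
	struct perf_event *event = file->private_data;
	struct perf_event_context *ctx;
	long ret;

	ctx = perf_event_ctx_lock(event);	/* ctx::mutex */
	ret = _perf_ioctl(event, cmd, arg);	/* -> perf_event_for_each() */
	perf_event_ctx_unlock(event, ctx);

	return ret;
}

/* called from perf_event_for_each() with ctx::mutex held */
static void perf_event_for_each_child(struct perf_event *event,
				      void (*func)(struct perf_event *))
{
	struct perf_event *child;

	mutex_lock(&event->child_mutex);	/* event::child_mutex */
	func(event);				/* e.g. _perf_event_enable() */
	list_for_each_entry(child, &event->child_list, child_list)
		func(child);
	mutex_unlock(&event->child_mutex);
}

static void _perf_event_enable(struct perf_event *event)
{
	struct perf_event_context *ctx = event->ctx;

	raw_spin_lock_irq(&ctx->lock);		/* ctx::lock, inside child_mutex */
	/* ... */
	raw_spin_unlock_irq(&ctx->lock);
}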

Regards,
--
Alex
