    Subject: Re: [PATCH v7 1/2] perf/core: use rb trees for pinned/flexible groups
    Date: 2017-08-23
    Alexey Budankov <alexey.budankov@linux.intel.com> writes:

    > @@ -3091,61 +3231,55 @@ static void cpu_ctx_sched_out(struct perf_cpu_context *cpuctx,
    > }
    >
    > static void
    > -ctx_pinned_sched_in(struct perf_event_context *ctx,
    > -		    struct perf_cpu_context *cpuctx)
    > +ctx_pinned_sched_in(struct perf_event *event,
    > +		    struct perf_cpu_context *cpuctx,
    > +		    struct perf_event_context *ctx)

    If you're doing this, you also need to rename the function, because it
    now schedules in one event and not a context. But better just keep it as
    is.

    > {
    > -	struct perf_event *event;
    > -
    > -	list_for_each_entry(event, &ctx->pinned_groups, group_entry) {

    Why not put your new iterator here in place of the old one, instead of
    moving things around? Because what follows is hard to read and is also
    completely unnecessary:

    > -		if (event->state <= PERF_EVENT_STATE_OFF)
    > -			continue;
    > -		if (!event_filter_match(event))
    > -			continue;
    > +	if (event->state <= PERF_EVENT_STATE_OFF)
    > +		return;
    > +	if (!event_filter_match(event))
    > +		return;

    like this,

    >
    > -		/* may need to reset tstamp_enabled */
    > -		if (is_cgroup_event(event))
    > -			perf_cgroup_mark_enabled(event, ctx);
    > +	/* may need to reset tstamp_enabled */
    > +	if (is_cgroup_event(event))
    > +		perf_cgroup_mark_enabled(event, ctx);

    or this

    >
    > -		if (group_can_go_on(event, cpuctx, 1))
    > -			group_sched_in(event, cpuctx, ctx);
    > +	if (group_can_go_on(event, cpuctx, 1))
    > +		group_sched_in(event, cpuctx, ctx);

    etc, etc.
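
    To illustrate the point: the function could keep its current signature,
    and the whole change could boil down to swapping the list walk for the
    new tree walk. Just a sketch, reusing the perf_event_groups_for_each()
    arguments that appear further down in your patch and assuming the macro
    takes a statement body like list_for_each_entry() does:

    static void
    ctx_pinned_sched_in(struct perf_event_context *ctx,
    		    struct perf_cpu_context *cpuctx)
    {
    	struct perf_event *event;
    	struct rb_node *node;

    	/* only the iterator changes; the loop body stays put */
    	perf_event_groups_for_each(event, node, &ctx->pinned_groups,
    				   group_node, group_list, group_entry) {
    		if (event->state <= PERF_EVENT_STATE_OFF)
    			continue;
    		if (!event_filter_match(event))
    			continue;

    		/* may need to reset tstamp_enabled */
    		if (is_cgroup_event(event))
    			perf_cgroup_mark_enabled(event, ctx);

    		if (group_can_go_on(event, cpuctx, 1))
    			group_sched_in(event, cpuctx, ctx);

    		/* ... rest of the loop body unchanged ... */
    	}
    }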

    > @@ -3156,7 +3290,8 @@ ctx_sched_in(struct perf_event_context *ctx,
    > 	     struct task_struct *task)
    > {
    > 	int is_active = ctx->is_active;
    > -	u64 now;

    Why?

    > +	struct perf_event *event;
    > +	struct rb_node *node;
    >
    > 	lockdep_assert_held(&ctx->lock);
    >
    > @@ -3175,7 +3310,7 @@ ctx_sched_in(struct perf_event_context *ctx,
    >
    > 	if (is_active & EVENT_TIME) {
    > 		/* start ctx time */
    > -		now = perf_clock();
    > +		u64 now = perf_clock();

    Why?

    > 		ctx->timestamp = now;
    > 		perf_cgroup_set_timestamp(task, ctx);
    > 	}
    > @@ -3185,11 +3320,19 @@ ctx_sched_in(struct perf_event_context *ctx,
    > 	 * in order to give them the best chance of going on.
    > 	 */
    > 	if (is_active & EVENT_PINNED)
    > -		ctx_pinned_sched_in(ctx, cpuctx);
    > +		perf_event_groups_for_each(event, node,
    > +				&ctx->pinned_groups, group_node,
    > +				group_list, group_entry)
    > +			ctx_pinned_sched_in(event, cpuctx, ctx);

    So this perf_event_groups_for_each() can just move into
    ctx_*_sched_in(), can't it?
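
    The call sites here can then stay as simple as they were, with the
    flexible side getting the same treatment (again, just a sketch of what
    I have in mind):

    	if (is_active & EVENT_PINNED)
    		ctx_pinned_sched_in(ctx, cpuctx);

    	/* Then walk through the flexible groups. */
    	if (is_active & EVENT_FLEXIBLE)
    		ctx_flexible_sched_in(ctx, cpuctx);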

    Regards,
    --
    Alex
