Subject: Re: [PATCH] perf_events: fix and improve x86 event scheduling
On Mon, 2011-11-07 at 12:01 +0100, Stephane Eranian wrote:
> +	/*
> +	 * scan all possible counters for this event
> +	 * but use the one with the smallest counter weight,
> +	 * i.e., give a chance to other less constrained events
> +	 */
> 	for_each_set_bit(j, c->idxmsk, X86_PMC_IDX_MAX) {
>
> +		if (test_bit(j, used_mask))
> +			continue;
> +
> +		if (wcnt[j] < min_wcnt) {
> +			min_wcnt = wcnt[j];
> +			wcnt_idx = j;
> +		}
> +	}

The problem with this is that it will typically hit the worst case for
Intel fixed-purpose events, since the fixed-purpose counters have the
highest counter indices and their constraint masks are the heaviest in
the system, ensuring we hit the max loop count on the top loop.

Then again, with Robert's approach we have to mark all fixed-purpose
thingies as redo, and we might hit some weird cases there as well; I
can't seem to get my brain straight on that case though.



