Date: Mon, 25 Apr 2016
From: Vikas Shivappa
Subject: Re: [PATCH 1/4] perf/x86/cqm,mbm: Store cqm,mbm count for all events when RMID is recycled


On Mon, 25 Apr 2016, Peter Zijlstra wrote:

> On Fri, Apr 22, 2016 at 05:27:18PM -0700, Vikas Shivappa wrote:
>> During RMID recycling, when an event loses its RMID the counter was
>> saved for the group leader but not for the other events in the event
>> group. This could lead to a situation where, if two perf instances are
>> counting the same PID, one of them would not see the updated count that
>> the other instance sees. This patch fixes the issue by saving the count
>> for all the events in the same event group.
>
>
>> @@ -486,14 +495,21 @@ static u32 intel_cqm_xchg_rmid(struct perf_event *group, u32 rmid)
>>  	 * If our RMID is being deallocated, perform a read now.
>>  	 */
>>  	if (__rmid_valid(old_rmid) && !__rmid_valid(rmid)) {
>>
>> +		rr = __init_rr(old_rmid, group->attr.config, 0);
>>  		cqm_mask_call(&rr);
>>  		local64_set(&group->count, atomic64_read(&rr.value));
>> +		list_for_each_entry(event, head, hw.cqm_group_entry) {
>> +			if (event->hw.is_group_event) {
>> +
>> +				evttype = event->attr.config;
>> +				rr = __init_rr(old_rmid, evttype, 0);
>> +
>> +				cqm_mask_call(&rr);
>> +				local64_set(&event->count,
>> +						atomic64_read(&rr.value));
>
> Randomly indent much?

Will fix. The extra indentation was added by mistake, in advance of the next patch.

Thanks,
Vikas

>
>> +			}
>> +		}
>>  	}
>
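
For context, here is a minimal standalone sketch of the behaviour the patch is after: when the RMID is recycled away from a group, the count is cached for every event in the group, not only for the leader, so a second perf instance reading a sibling event still sees the value accumulated under the old RMID. All names below (model_event, read_rmid_count, cache_counts_on_recycle) are hypothetical stand-ins for the kernel's perf_event, cqm_mask_call()/__init_rr() and the hunk quoted above; this is a model under those assumptions, not the kernel code itself.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct model_event {
	uint64_t config;        /* event type (occupancy, local/total MBM) */
	uint64_t count;         /* cached value returned by later reads   */
	int      is_group_event;
};

/* Stand-in for the IPI + counter read done by cqm_mask_call(). */
static uint64_t read_rmid_count(uint32_t rmid, uint64_t config)
{
	return (uint64_t)rmid * 1000 + config;   /* fake data */
}

/*
 * Called when @rmid is being recycled away from the group.  Before the
 * fix only events[0] (the leader) had its count cached; with the fix
 * every group member gets the value accumulated under the old RMID.
 */
static void cache_counts_on_recycle(struct model_event *events, int nr,
                                    uint32_t rmid)
{
	for (int i = 0; i < nr; i++) {
		if (i == 0 || events[i].is_group_event)
			events[i].count = read_rmid_count(rmid, events[i].config);
	}
}

int main(void)
{
	struct model_event group[2] = {
		{ .config = 1, .is_group_event = 0 },   /* leader  */
		{ .config = 2, .is_group_event = 1 },   /* sibling */
	};

	cache_counts_on_recycle(group, 2, 5);

	printf("leader count:  %" PRIu64 "\n", group[0].count);
	printf("sibling count: %" PRIu64 "\n", group[1].count);
	return 0;
}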
