Subject: Re: [PATCH V2 04/23] perf/x86/intel: Support adaptive PEBS v4
On Thu, Mar 21, 2019 at 01:56:44PM -0700, kan.liang@linux.intel.com wrote:
> +static inline void *next_pebs_record(void *p)
> +{
> +	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> +	unsigned int size;
> +
> +	if (x86_pmu.intel_cap.pebs_format < 4)
> +		size = x86_pmu.pebs_record_size;
> +	else
> +		size = cpuc->pebs_record_size;
> +	return p + size;
> +}

> @@ -1323,19 +1580,19 @@ get_next_pebs_record_by_bit(void *base, void *top, int bit)
> 	if (base == NULL)
> 		return NULL;
>
> -	for (at = base; at < top; at += x86_pmu.pebs_record_size) {
> -		struct pebs_record_nhm *p = at;
> +	for (at = base; at < top; at = next_pebs_record(at)) {
> +		unsigned long status = get_pebs_status(at);

afaict we do not mix base and adaptive records, and thus the above
really could use cpuc->pebs_record_size unconditionally, right?
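Something like the sketch below, that is; this assumes cpuc->pebs_record_size is
also kept valid for the pre-v4 (non-adaptive) formats, which I have not verified
against the rest of the series:

static inline void *next_pebs_record(void *p)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);

	/* assumption: cpuc->pebs_record_size is valid for all PEBS formats */
	return p + cpuc->pebs_record_size;
}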

