Subject: Re: perf_events: questions about cpu_has_ht_siblings() and offcore support
On Fri, 2011-04-22 at 22:41 +0800, Stephane Eranian wrote:
> On Fri, Apr 22, 2011 at 4:31 PM, Lin Ming <ming.m.lin@intel.com> wrote:
> > On Fri, 2011-04-22 at 21:46 +0800, Stephane Eranian wrote:
> >> On Fri, Apr 22, 2011 at 3:26 PM, Lin Ming <ming.m.lin@intel.com> wrote:
> >> > On Fri, 2011-04-22 at 20:59 +0800, Stephane Eranian wrote:
> >> >> Lin,
> >> >>
> >> >> In arch/x86/include/asm/smp.h, you added:
> >> >>
> >> >> static inline bool cpu_has_ht_siblings(void)
> >> >> {
> >> >>         bool has_siblings = false;
> >> >> #ifdef CONFIG_SMP
> >> >>         has_siblings = cpu_has_ht && smp_num_siblings > 1;
> >> >> #endif
> >> >>         return has_siblings;
> >> >> }
> >> >>
> >> >> I am wondering about the goal of this function.
> >> >>
> >> >> Is it supposed to return whether or not HT is enabled?
> >> >>
> >> >> HT enabled != HT supported
> >> >
> >> > It's used to check if HT is supported.
> >> >
> >> Ok, that makes more sense.
> >>
> >> > But unfortunately, we didn't find a way to check if HT is enabled.
> >> > So I just check if HT is supported.
> >> >
> >> >>
> >> >> +static inline int is_ht_enabled(void)
> >> >> +{
> >> >> +        bool has_ht = false;
> >> >> +#ifdef CONFIG_SMP
> >> >> +        int w;
> >> >> +        w = cpumask_weight(cpu_sibling_mask(smp_processor_id()));
> >> >> +        has_ht = cpu_has_ht && w > 1;
> >> >> +#endif
> >> >> +        return has_ht;
> >> >> +}
> >> >>
> >> >> OTOH, you need some validation even in the case HT is off. No two events
> >> >> scheduled together on the same PMU can have different values for the extra
> >
> > I got it now.
> >
> >> >> reg. Thus, the fact that cpu_has_ht_siblings() is immune to HT state helps here,
> >> >> but then what's the point of it?
> >> >
> >> > The point is to avoid the per-core resource allocations (which are used
> >> > to sync between HT siblings) if HT is not supported.
> >> >
> >> But if you check x86_pmu.extra_regs, that should do it as well.
> >
> > I don't understand here.
> > Did you mean we can avoid the percore resource allocations by just
> > checking x86_pmu.extra_regs? How?
>
> If you have no extra_regs, i.e., regs that are shared, then why would
> you need the per-core allocation?

But "extra_regs" does not imply they are regs that are shared.
It only means some events need to set extra registers to work.
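
Roughly, an extra_regs entry only describes something like the following
(a simplified sketch; the field names and values are approximate, not the
actual kernel table):

/*
 * Simplified sketch, not the actual kernel structures: an "extra reg"
 * entry only records that an event additionally needs an auxiliary MSR
 * programmed with part of its configuration.  Whether that MSR is
 * shared between HT siblings is a separate property of the CPU.
 */
struct example_extra_reg {
        unsigned int    event;          /* event select code */
        unsigned int    msr;            /* auxiliary MSR the event needs */
        u64             valid_mask;     /* bits software may set in that MSR */
};

static const struct example_extra_reg example_extra_regs[] = {
        /* OFFCORE_RESPONSE_0 (event 0xb7) needs MSR_OFFCORE_RSP_0 (0x1a6) */
        { .event = 0xb7, .msr = 0x1a6, .valid_mask = 0xffff },
};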

>
>
> >
> >>
> >> Suppose HT is disabled and I do:
> >>
> >> perf stat -e offcore_response_0:dmnd_data_rd,offcore_response_0:dmnd_rfo ......
> >>
> >> This should still not be allowed.
> >
> > Ah, you are right.
> > We always have to check extra_config even if HT is disabled and/or
> > not supported.
> >
> Yes. You won't need the locking, though.
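
To make the constraint concrete, here is a rough sketch of the kind of
check being described (simplified, not the actual scheduling code): two
events on the same PMU that program the same extra MSR must agree on its
value, and the second one has to be rejected otherwise.

/*
 * Rough sketch, not the actual scheduling code: the first user "claims"
 * the auxiliary MSR with its value; later users must match it exactly,
 * with or without HT.
 */
static bool example_extra_config_compatible(u64 claimed_config, bool claimed,
                                            u64 new_config)
{
        if (!claimed)
                return true;                    /* first user, nothing to conflict with */

        return claimed_config == new_config;    /* later users must match exactly */
}

With HT off this state can live in the per-CPU data with no locking at
all; with HT on the same comparison has to go through the shared per-core
structure.
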
>
> >>
> >> I think in this case, HT being supported will cause your code to still allocate
> >> the per-core struct. There will be no matching of per-core structs in starting().
> >> So I suspect things still work.
> >
> > That is not a problem.
> > If no match is found, then the if (...) statement below won't be executed.
> >
> > intel_pmu_cpu_starting:
> >
> >         for_each_cpu(i, topology_thread_cpumask(cpu)) {
> >                 struct intel_percore *pc = per_cpu(cpu_hw_events, i).per_core;
> >
> >                 if (pc && pc->core_id == core_id) {
> >                         kfree(cpuc->per_core);
> >                         cpuc->per_core = pc;
> >                         break;
> >                 }
> >         }
> >
> > Or do you see any other potential problem?
> >
> I think when HT is off, you will never execute the if statement, because
> no core_id will ever match another.

The "if" statement is not executed so the per-core structs allocated in
intel_pmu_cpu_prepare is not freed.

This is the intended behavior since we don't have a way to check if HT
is off.

>
> Another thing that struck me when looking at the hotplug code for
> per-core is the lack of locking. I assume that's because CPU hotplug
> is inherently serialized: you cannot have a CPU going offline
> and one going online at the same time. Is that right? Otherwise
> I wonder if you could simply do per_core->refcnt++ vs.
> per_core->refcnt--
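
For reference, a rough sketch of the refcounting being suggested, assuming
CPU hotplug callbacks really are serialized so the refcnt updates need no
extra locking, and assuming struct intel_percore grows a refcnt field
(simplified, not the actual code; the sharing loop is the one quoted above):

static void example_cpu_starting(int cpu)
{
        struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);

        /* ... sharing loop as quoted above ... */

        cpuc->per_core->core_id = topology_core_id(cpu);
        cpuc->per_core->refcnt++;       /* one more CPU uses this buffer */
}

static void example_cpu_dying(int cpu)
{
        struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
        struct intel_percore *pc = cpuc->per_core;

        if (pc && --pc->refcnt == 0)    /* last CPU on this core going away */
                kfree(pc);
        cpuc->per_core = NULL;
}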



