Subject: Re: [PATCH] perf_events: AMD event scheduling (v2)
>> >> +     nb_id = amd_get_nb_id(cpu);
>> >> +     if (nb_id == BAD_APICID)
>> >> +             return;
>> >> +
>> >> +     cpu1 = &per_cpu(cpu_hw_events, cpu);
>> >> +     cpu1->amd_nb = NULL;
>> >> +
>> >> +     raw_spin_lock(&amd_nb_lock);
>> >> +
>> >> +     for_each_online_cpu(i) {
>> >> +             cpu2 = &per_cpu(cpu_hw_events, i);
>> >> +             nb = cpu2->amd_nb;
>> >> +             if (!nb)
>> >> +                     continue;
>> >> +             if (nb->nb_id == nb_id)
>> >> +                     goto found;
>> >> +     }
>> >> +
>> >> +     nb = amd_alloc_nb(cpu, nb_id);
>> >> +     if (!nb) {
>> >> +             pr_err("perf_events: failed to allocate NB storage for CPU%d\n", cpu);
>> >> +             raw_spin_unlock(&amd_nb_lock);
>> >> +             return;
>> >> +     }
>> >> +found:
>> >> +     nb->refcnt++;
>> >> +     cpu1->amd_nb = nb;
>> >> +
>> >> +     raw_spin_unlock(&amd_nb_lock);
>> >
>> > Can't this be simplified by using the cpu to node mask?
>>
>> You mean to find the NB that corresponds to a CPU?
>
> Yep, saves having to poke at all cpus.

That's not what the loop is about.

amd_get_nb_id() is the function which returns the NB id
for a given cpu. It is not clear to me that NB and NUMA
node always overlap perfectly. What matters here is the
HW config, not how the NUMA nodes were set up.

We allocate one amd_nb struct per NB. Once we have our
NB id, we need to find out whether the corresponding struct
has already been allocated by another CPU on the same NB;
if so, we just bump refcnt, otherwise we must allocate it.
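To make the refcounting concrete, the shared per-NB structure looks
roughly like this (simplified sketch; only nb_id and refcnt appear in
the hunk quoted above, the two arrays are what the rest of the patch
uses for counter arbitration):

struct amd_nb {
	int			nb_id;		/* NorthBridge id */
	int			refcnt;		/* online CPUs pointing at this struct */
	/* shared state used to arbitrate the NB counters between those CPUs */
	struct perf_event	*owners[X86_PMC_IDX_MAX];
	struct event_constraint	event_constraints[X86_PMC_IDX_MAX];
};

All CPUs with the same nb_id end up with their cpu_hw_events->amd_nb
pointing at the same struct, serialized by amd_nb_lock; the loop above
is only there to find an existing instance before allocating a new one.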

>> > Also, I think this is buggy in that:
>> >
>> >  perf_disable();
>> >  event->pmu->disable(event);
>> >  ...
>> >  event->pmu->enable(event);
>> >  perf_enable();
>> >
>> > can now fail, I think we need to move the put_event_constraint() from
>> > x86_pmu_disable() into x86_perf_enable() or something.
>>
>> Constraints are reserved during x86_scheduling(), not during enable().
>> So if you had a conflict it was detected earlier than that.
>
> What I'm worried about is:
>
> CPU A and B are of the same NorthBridge and all node counters are taken.
>
> CPU-A                   CPU-B
>
> perf_disable();
> event->pmu->disable(event)
>  x86_pmu.put_event_constraint() /* free slot n */
>
>                        event->pmu->enable(event);
>                          x86_schedule_events();
>                            x86_pmu.get_event_constraint() /* grab slot n */
>
> event->pmu->enable(event)
>  x86_schedule_events()
>    x86_pmu.get_event_constraint() /* FAIL */
> perf_enable();
>
> That means you cannot disable/enable the same event within a
> perf_disable/enable section.
>
Yes, but I would say that is a consequence of the semantics of the
enable/disable interface. Here you are disabling only "for a short
period" and you know you are going to re-enable, yet you are going
through an API that is a generic enable/disable. You would need to
pass some argument to disable() meaning "temporary", i.e., "stop
counting but don't release the resource".