    Subject: Re: x86, perf: throttling issues with long nmi latencies
    On Mon, Oct 14, 2013 at 04:35:49PM -0400, Don Zickus wrote:
    > I have been playing with quad-socket Ivy Bridges for a while and have seen
    > numerous "perf samples too long" messages, to the point that the machine is
    > unusable for any perf analysis.

    We've seen the same problem on our large systems. Dave
    did some fixes in mainline, but they only work around the problem.

    One main cause, I believe, is the dynamic period, which often
    goes down to insanely low values for cycles.

    This also causes a lot of measurement overhead, without really giving better
    data.

    If you use -c ... with a reasonable fixed period, the problem goes away
    completely (with pmu-tools, 'ocperf stat -c default' sets a reasonable default).
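
    For example, something along these lines (the event, period and workload
    are placeholders, not tuned values):

        perf record -e cycles -c 100003 -- ./your_workload   # fixed sample period
        perf record -e cycles -F 1000   -- ./your_workload   # target sample rate in Hz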

    > So I tried to investigate the source of the NMI latencies using the
    > traditional 'rdtscll()' command. That failed miserably. Then it was
    > pointed out to me that rdtscll() is terrible for benchmarking due to
    > out-of-order execution by the Intel processors. This Intel whitepaper
    > describes a better way using cpuid and rdtsc:
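
    (For reference, the fencing idea the whitepaper describes looks roughly like
    this in userspace C on x86-64 -- an illustrative sketch with gcc-style inline
    asm, not code taken from the whitepaper or the kernel:)

        #include <stdint.h>

        /* CPUID serializes, so RDTSC cannot be reordered with earlier
           instructions; RDTSCP plus a trailing CPUID keeps later
           instructions from moving above the second read. */
        static inline uint64_t tsc_start(void)
        {
                uint32_t lo, hi;

                asm volatile("xor %%eax, %%eax\n\t"
                             "cpuid\n\t"
                             "rdtsc"
                             : "=a" (lo), "=d" (hi)
                             : : "%rbx", "%rcx");
                return ((uint64_t)hi << 32) | lo;
        }

        static inline uint64_t tsc_stop(void)
        {
                uint32_t lo, hi;

                asm volatile("rdtscp\n\t"
                             "mov %%eax, %0\n\t"
                             "mov %%edx, %1\n\t"
                             "xor %%eax, %%eax\n\t"
                             "cpuid"
                             : "=r" (lo), "=r" (hi)
                             : : "%rax", "%rbx", "%rcx", "%rdx");
                return ((uint64_t)hi << 32) | lo;
        }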

    We just used the ftrace function tracer.

    > the longest one first. It seems to be 'copy_from_user_nmi'
    >
    > intel_pmu_handle_irq ->
    > intel_pmu_drain_pebs_nhm ->
    > __intel_pmu_drain_pebs_nhm ->
    > __intel_pmu_pebs_event ->
    > intel_pmu_pebs_fixup_ip ->
    > copy_from_user_nmi
    >
    > In intel_pmu_pebs_fixup_ip(), if the while-loop goes over 50, the sum of
    > all the copy_from_user_nmi latencies seems to go over 1,000,000 cycles

    fixup_ip has to decode a whole basic block to correct the off-by-one IP.
    I'm not sure why the copy dominates, though; copy_from_user_nmi
    does a lot of nasty things.
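
    (To make the cost concrete: the fixup walks from the start of the basic
    block to the PEBS-reported IP one instruction at a time, and each step
    through user text needs its own copy_from_user_nmi() call.  A heavily
    simplified sketch of that loop -- not the actual kernel code, return-value
    conventions and helpers differ, and block_start/pebs_ip are illustrative
    names:)

        unsigned long addr = block_start;       /* e.g. last branch target */
        unsigned long prev = 0;
        u8 buf[MAX_INSN_SIZE];
        struct insn insn;
        int bytes;

        while (addr < pebs_ip) {                /* pebs_ip: recorded, off-by-one IP */
                /* one user copy per decoded instruction, hence the
                 * 50+ iterations seen above */
                bytes = copy_from_user_nmi(buf, (void __user *)addr,
                                           MAX_INSN_SIZE);
                if (bytes != MAX_INSN_SIZE)     /* short copy: give up */
                        return 0;
                insn_init(&insn, buf, 1);       /* 1 = 64-bit mode */
                insn_get_length(&insn);
                prev = addr;
                addr += insn.length;
        }
        /* 'prev' is now the instruction that actually caused the event. */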

    I would just use :p, which skips this. The single-instruction correction
    is not worth all the overhead, and there is always more skid anyway,
    even with the correction.

    The good news is that Haswell fixes the overhead: :pp is as fast as :p.

    > (there are some cases where only 10 iterations are needed to go that high
    > too, but in general over 50 or so). At this point copy_from_user_nmi
    > seems to account for over 90% of the NMI latency.

    Yes, we saw the same. It's unclear why it is that expensive.
    I've also seen the copy dominate with -g.

    Also, for some reason it seems to hurt much more on larger systems
    (cache misses?). Unfortunately it's hard to use perf to analyze perf;
    that was the roadblock the last time I tried to understand this better.

    One guess was that if you profile the same code running on many
    cores, the copy_from_user_nmi code will have a very hot cache line
    with the page reference count.

    Some obvious improvements are likely possible:

    The copy function is pretty dumb -- for example, it repins the pages
    for each access. It would likely be much faster to batch that
    and only pin once per backtrace/decode. This would need
    a new interface.
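
    (A very rough, untested sketch of that batching idea -- all names here are
    invented for illustration, and NMI-safety and page-crossing details are
    glossed over: pin the range once, serve all reads from the pinned pages,
    then drop the references:)

        #define NMI_WINDOW_PAGES 4                      /* arbitrary limit */

        struct nmi_user_window {
                struct page *pages[NMI_WINDOW_PAGES];
                unsigned long start;                    /* page-aligned base */
                int nr;                                 /* pages pinned */
        };

        static int nmi_user_window_pin(struct nmi_user_window *w,
                                       unsigned long addr, int nr_pages)
        {
                nr_pages = min(nr_pages, NMI_WINDOW_PAGES);
                w->start = addr & PAGE_MASK;
                /* pin once, instead of once per copy */
                w->nr = __get_user_pages_fast(w->start, nr_pages, 0, w->pages);
                return w->nr;
        }

        static void nmi_user_window_release(struct nmi_user_window *w)
        {
                int i;

                for (i = 0; i < w->nr; i++)
                        put_page(w->pages[i]);
        }

        /* Read from within one pinned page; no pin/unpin per access. */
        static int nmi_user_window_read(struct nmi_user_window *w, void *dst,
                                        unsigned long addr, unsigned long len)
        {
                unsigned long idx = (addr - w->start) >> PAGE_SHIFT;
                unsigned long off = offset_in_page(addr);
                void *kaddr;

                if (idx >= w->nr || off + len > PAGE_SIZE)
                        return -EFAULT;                 /* keep the sketch simple */

                kaddr = kmap_atomic(w->pages[idx]);
                memcpy(dst, kaddr + off, len);
                kunmap_atomic(kaddr);
                return 0;
        }

    The fixup/callchain code would then pin once per sample and call the read
    helper for each instruction or stack word.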

    I suppose there would be a way to do this access without actually
    incrementing the ref count (e.g. with a seqlock-like scheme,
    or just using TSX).

    But if you don't do the IP correction and only do the stack access,
    in theory it should be possible to avoid the majority of the changes.

    First-level recommendations:

    - Always use -c ... or -F ..., NEVER the dynamic period (example below)
    - Don't use :pp
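
    E.g. something like:

        perf record -e cycles:p -c 2000003 -- ./your_workload

    (period and workload are placeholders; the point is an explicit period
    plus :p instead of :pp).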

    -Andi


