    From: Milian Wolff <milian.wolff@kdab.com>
    Subject: Re: PEBS level 2/3 breaks dwarf unwinding! [WAS: Re: Broken dwarf unwinding - wrong stack pointer register value?]
    Date: Wed, 14 Nov 2018 14:21 +0100
    On Monday, 12 November 2018 04:26:37 CET Andi Kleen wrote:
    > On Sat, Nov 10, 2018 at 09:50:05PM -0500, Travis Downs wrote:
    > > On Sat, Nov 10, 2018 at 8:07 PM Andi Kleen <ak@linux.intel.com> wrote:
    > > On Sat, Nov 10, 2018 at 04:42:48PM -0500, Travis Downs wrote:
    > > > I guess this problem doesn't occur for LBR unwinding since the LBR
    > > > records are captured at the same
    > > > moment in time as the PEBS record, so reflect the correct branch
    > > > sequence.
    > >
    > > Actually it happens with LBRs too, but it always gives the backtrace
    > > consistently at the PMI trigger point.
    > >
    > > That's weird - so the LBR records are from the PMI point, but the rest
    > > of the PEBS record comes from the PEBS trigger point? Or the LBR isn't
    > > part of PEBS at all?
    >
    > LBR is not part of PEBS, but is collected separately in the PMI handler.
    >
    > > > overhead calculations will be based on the captured stacks, I guess -
    > > > but when I annotate, will the values I see correspond to the PEBS IPs
    > > > or the PMI IPs?
    > >
    > > Based on PEBS IPs.
    > >
    > > It would be a good idea to add a check to perf report
    > > that the two IPs are different, and if they differ
    > > add some indicator to the sample. This could be a new sort key,
    > > although that would waste some space on the screen, or something
    > > else.
    > >
    > > In the case that PEBS events are used, the IP will differ essentially
    > > 100% of the time, right? That is, there will always be *some* skid.
    >
    > I wouldn't say that. It depends on what the CPU is doing and the IPC
    > of the code.
    >
    > Also the backtrace inconsistency can only happen if the sample races with
    > a function return. If it doesn't, then the backtrace will point
    > to the correct function, even though the unwind IP is different.
    >
    > For example in the common case where you profile a long loop it
    > is unlikely to happen.
    >
    > > indicating otherwise above), I could imagine a hybrid mode where LBR is
    > > used to go back some number of calls and then dwarf or FP or whatever
    > > unwinding takes over, because the further down the stack you go, the
    > > more likely the PEBS trigger point and PMI point are to have a
    > > consistent stack.
    >
    > Could collect numbers on how often it happens, but it would surprise
    > me if anything complicated is worth it. I would just do the minimum fixes
    > to address the unwinder errors, and perhaps add the "unwind ip differs"
    > indication.

    I now have a preliminary WIP patch up and running (see attached), which works
    for my use case and improves perf noticeably. All traces of "unknown" frames
    are eradicated, i.e. unwinding now works for 100% of the samples!

    There are some remaining open questions on my side:

    1) Do we really want to change the API of perf_event_overflow_* and
    perf_event_output_* and adapt all of their users? To me, it seems as if only
    PEBS and IBS would want to pass distinct register sets for iregs and uregs.
    All other users of the API would continue to pass the same set. Changing the
    central API produces a lot of churn for no good reason. Does anyone see an
    alternative to this?

    The only alternative idea I have right now is to temporarily change the
    sample_type in __intel_pmu_pebs_event before we call perf_event_output /
    perf_event_overflow. I.e. unset PERF_SAMPLE_REGS_INTR, then sample the regs
    manually from iregs before calling perf_event_{overflow,output}, then set
    PERF_SAMPLE_REGS_INTR again. Or we could introduce a custom flag similar to
    __PERF_SAMPLE_CALLCHAIN_EARLY here...
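
    To make the custom-flag idea concrete, here is a rough, untested sketch
    modeled on how __PERF_SAMPLE_CALLCHAIN_EARLY is handled today; the flag
    name and bit value below are made up for illustration:

    ```
    /* Hypothetical internal flag, analogous to __PERF_SAMPLE_CALLCHAIN_EARLY. */
    #define __PERF_SAMPLE_REGS_INTR_EARLY	(1ULL << 62) /* illustrative bit */

    /*
     * In setup_pebs_sample_data(): sample the interrupt regs up front from
     * the real iregs, instead of the regs reconstructed from the PEBS record.
     */
    if (event->attr.sample_type & __PERF_SAMPLE_REGS_INTR_EARLY)
            perf_sample_regs_intr(&data->regs_intr, iregs);

    /* In perf_prepare_sample(): don't overwrite an early sample. */
    if (sample_type & PERF_SAMPLE_REGS_INTR) {
            if (!(sample_type & __PERF_SAMPLE_REGS_INTR_EARLY))
                    perf_sample_regs_intr(&data->regs_intr, regs);
            /* ... size accounting stays as it is ... */
    }
    ```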

    2) How do we want to implement the "unwind ip differs" indication, as Andi
    puts it? I.e. on the perf report/script side, how should we display the
    stacks? Something like the following annotation, maybe?


    ```
    cpp-inlining 2605 [-01] 57.870061: 701199 cycles:pppu:
            7fc1042797b4 __hypot_finite+0x154 (/usr/lib/libm-2.28.so)
            7fc10425faf8 hypotf32x+0x18 (/usr/lib/libm-2.28.so) (unwind ip differs)
            5622c7452128 main+0x88 (/tmp/cpp-inlining)
            7fc104096222 __libc_start_main+0xf2 (/usr/lib/libc-2.28.so)
            5622c74521ed _start+0x2d (/tmp/cpp-inlining)
    ```
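
    On the tools side, one possible shape for that check is the following
    untested sketch (assuming both register sets were sampled, and using
    PERF_REG_X86_IP as a stand-in for the arch-specific instruction pointer):

    ```
    /* Compare the PEBS/user IP with the PMI/interrupt IP; untested sketch. */
    static bool unwind_ip_differs(struct perf_sample *sample)
    {
            u64 uip, iip;

            if (!sample->user_regs.regs || !sample->intr_regs.regs)
                    return false;

            if (perf_reg_value(&uip, &sample->user_regs, PERF_REG_X86_IP) ||
                perf_reg_value(&iip, &sample->intr_regs, PERF_REG_X86_IP))
                    return false;

            return uip != iip;
    }

    /* ... and when printing the frame where the unwind took over: */
    if (unwind_ip_differs(sample))
            printed += fprintf(fp, " (unwind ip differs)");
    ```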

    3) I suggest we always keep the first frame and sample IP from the user regs,
    i.e. the accurate PEBS/IBS IP. Then we add the following frames from unwinding
    the ustack with the iregs. But what do we do with the first iregs IP? If we
    add it, then we could see the same frame twice with slightly different IPs,
    like in the following, which I believe is undesirable:


    ```
    cpp-inlining 2605 [-01] 57.870061: 701199 cycles:pppu:
            7fc1042797b4 __hypot_finite+0x154 (/usr/lib/libm-2.28.so)
            7fc1042797b5 __hypot_finite+0x155 (/usr/lib/libm-2.28.so)
            7fc10425faf8 hypotf32x+0x18 (/usr/lib/libm-2.28.so) (unwind ip differs)
            5622c7452128 main+0x88 (/tmp/cpp-inlining)
            7fc104096222 __libc_start_main+0xf2 (/usr/lib/libc-2.28.so)
            5622c74521ed _start+0x2d (/tmp/cpp-inlining)
    ```

    But always skipping the IP is also sometimes wrong, like in this case:

    ```
    cpp-inlining 2605 [-01] 57.862313: 694984 cycles:pppu:
            7fc1042797b9 __hypot_finite+0x159 (/usr/lib/libm-2.28.so)
            5622c7452128 main+0x88 (/tmp/cpp-inlining)
            7fc104096222 __libc_start_main+0xf2 (/usr/lib/libc-2.28.so)
            5622c74521ed _start+0x2d (/tmp/cpp-inlining)
    ```

    Here, we are missing the hypotf32x call in between __hypot_finite and main.

    Do we want to introduce some heuristic for how to handle these scenarios? I.e.
    if uregs->ip and iregs->ip point to the same function symbol, then skip the
    frame for iregs->ip, otherwise add it?
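
    For illustration, such a heuristic might look roughly like the following on
    the tools side, using perf's existing symbol resolution; the function name
    and exact placement are assumptions, not a tested implementation:

    ```
    /*
     * Keep the iregs->ip frame only if it resolves to a different function
     * than uregs->ip; untested sketch.
     */
    static bool keep_iregs_ip_frame(struct thread *thread, u64 uip, u64 iip)
    {
            struct addr_location ual, ial;

            thread__find_symbol(thread, PERF_RECORD_MISC_USER, uip, &ual);
            thread__find_symbol(thread, PERF_RECORD_MISC_USER, iip, &ial);

            /* Same function symbol: the frame would just duplicate uregs->ip. */
            return !(ual.sym && ial.sym && ual.sym == ial.sym);
    }
    ```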

    Thanks
    --
    Milian Wolff | milian.wolff@kdab.com | Senior Software Engineer
    KDAB (Deutschland) GmbH, a KDAB Group company
    Tel: +49-30-521325470
    KDAB - The Qt, C++ and OpenGL Experts

    From 422d2a95eff344407ec425f0de55b264841d1757 Mon Sep 17 00:00:00 2001
    From: Milian Wolff <milian.wolff@kdab.com>
    Date: Wed, 14 Nov 2018 14:10:47 +0100
    Subject: [PATCH 1/2] [WIP] perf: make it possible to collect both iregs and
    uregs

    Previously, only one set of registers was stored in the perf
    data for both user and interrupt registers. Now, two distinct
    sets can be sampled.

    Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
    Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Jiri Olsa <jolsa@redhat.com>
    ---
    arch/x86/events/amd/ibs.c        |  2 +-
    arch/x86/events/core.c           |  2 +-
    arch/x86/events/intel/core.c     |  2 +-
    arch/x86/events/intel/ds.c       |  7 +++----
    arch/x86/events/intel/knc.c      |  2 +-
    arch/x86/events/intel/p4.c       |  2 +-
    arch/x86/kernel/ptrace.c         |  2 +-
    arch/x86/kvm/pmu.c               |  4 ++--
    drivers/oprofile/nmi_timer_int.c |  2 +-
    include/linux/perf_event.h       | 18 +++++++++++------
    kernel/events/core.c             | 34 ++++++++++++++++----------------
    kernel/trace/bpf_trace.c         |  2 +-
    kernel/watchdog_hld.c            |  2 +-
    13 files changed, 43 insertions(+), 38 deletions(-)

    diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
    index d50bb4dc0650..567db8878511 100644
    --- a/arch/x86/events/amd/ibs.c
    +++ b/arch/x86/events/amd/ibs.c
    @@ -670,7 +670,7 @@ static int perf_ibs_handle_irq(struct perf_ibs *perf_ibs, struct pt_regs *iregs)
    data.raw = &raw;
    }

    - throttle = perf_event_overflow(event, &data, &regs);
    + throttle = perf_event_overflow(event, &data, &regs, iregs);
    out:
    if (throttle)
    perf_ibs_stop(event, 0);
    diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
    index 106911b603bd..acdcafa57ca0 100644
    --- a/arch/x86/events/core.c
    +++ b/arch/x86/events/core.c
    @@ -1493,7 +1493,7 @@ int x86_pmu_handle_irq(struct pt_regs *regs)
    if (!x86_perf_event_set_period(event))
    continue;

    - if (perf_event_overflow(event, &data, regs))
    + if (perf_event_overflow(event, &data, regs, regs))
    x86_pmu_stop(event, 0);
    }

    diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
    index 273c62e81546..2156620b3d9e 100644
    --- a/arch/x86/events/intel/core.c
    +++ b/arch/x86/events/intel/core.c
    @@ -2299,7 +2299,7 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
    if (has_branch_stack(event))
    data.br_stack = &cpuc->lbr_stack;

    - if (perf_event_overflow(event, &data, regs))
    + if (perf_event_overflow(event, &data, regs, regs))
    x86_pmu_stop(event, 0);
    }

    diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
    index b7b01d762d32..018fc0649033 100644
    --- a/arch/x86/events/intel/ds.c
    +++ b/arch/x86/events/intel/ds.c
    @@ -639,7 +639,7 @@ int intel_pmu_drain_bts_buffer(void)
    * the sample.
    */
    rcu_read_lock();
    - perf_prepare_sample(&header, &data, event, &regs);
    + perf_prepare_sample(&header, &data, event, &regs, &regs);

    if (perf_output_begin(&handle, event, header.size *
    (top - base - skip)))
    @@ -1273,7 +1273,6 @@ static void setup_pebs_sample_data(struct perf_event *event,
    set_linear_ip(regs, pebs->ip);
    }

    -
    if ((sample_type & (PERF_SAMPLE_ADDR | PERF_SAMPLE_PHYS_ADDR)) &&
    x86_pmu.intel_cap.pebs_format >= 1)
    data->addr = pebs->dla;
    @@ -1430,7 +1429,7 @@ static void __intel_pmu_pebs_event(struct perf_event *event,

    while (count > 1) {
    setup_pebs_sample_data(event, iregs, at, &data, &regs);
    - perf_event_output(event, &data, &regs);
    + perf_event_output(event, &data, &regs, iregs);
    at += x86_pmu.pebs_record_size;
    at = get_next_pebs_record_by_bit(at, top, bit);
    count--;
    @@ -1442,7 +1441,7 @@ static void __intel_pmu_pebs_event(struct perf_event *event,
    * All but the last records are processed.
    * The last one is left to be able to call the overflow handler.
    */
    - if (perf_event_overflow(event, &data, &regs)) {
    + if (perf_event_overflow(event, &data, &regs, iregs)) {
    x86_pmu_stop(event, 0);
    return;
    }
    diff --git a/arch/x86/events/intel/knc.c b/arch/x86/events/intel/knc.c
    index 618001c208e8..9ea5a13af83f 100644
    --- a/arch/x86/events/intel/knc.c
    +++ b/arch/x86/events/intel/knc.c
    @@ -252,7 +252,7 @@ static int knc_pmu_handle_irq(struct pt_regs *regs)

    perf_sample_data_init(&data, 0, event->hw.last_period);

    - if (perf_event_overflow(event, &data, regs))
    + if (perf_event_overflow(event, &data, regs, regs))
    x86_pmu_stop(event, 0);
    }

    diff --git a/arch/x86/events/intel/p4.c b/arch/x86/events/intel/p4.c
    index d32c0eed38ca..704457b5f49a 100644
    --- a/arch/x86/events/intel/p4.c
    +++ b/arch/x86/events/intel/p4.c
    @@ -1037,7 +1037,7 @@ static int p4_pmu_handle_irq(struct pt_regs *regs)
    continue;


    - if (perf_event_overflow(event, &data, regs))
    + if (perf_event_overflow(event, &data, regs, regs))
    x86_pmu_stop(event, 0);
    }

    diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
    index ffae9b9740fd..13b2230e5e9b 100644
    --- a/arch/x86/kernel/ptrace.c
    +++ b/arch/x86/kernel/ptrace.c
    @@ -499,7 +499,7 @@ static int genregs_set(struct task_struct *target,

    static void ptrace_triggered(struct perf_event *bp,
    struct perf_sample_data *data,
    - struct pt_regs *regs)
    + struct pt_regs *regs, struct pt_regs *iregs)
    {
    int i;
    struct thread_struct *thread = &(current->thread);
    diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
    index 58ead7db71a3..b556b2d467e1 100644
    --- a/arch/x86/kvm/pmu.c
    +++ b/arch/x86/kvm/pmu.c
    @@ -57,7 +57,7 @@ static void kvm_pmi_trigger_fn(struct irq_work *irq_work)

    static void kvm_perf_overflow(struct perf_event *perf_event,
    struct perf_sample_data *data,
    - struct pt_regs *regs)
    + struct pt_regs *regs, struct pt_regs *iregs)
    {
    struct kvm_pmc *pmc = perf_event->overflow_handler_context;
    struct kvm_pmu *pmu = pmc_to_pmu(pmc);
    @@ -71,7 +71,7 @@ static void kvm_perf_overflow(struct perf_event *perf_event,

    static void kvm_perf_overflow_intr(struct perf_event *perf_event,
    struct perf_sample_data *data,
    - struct pt_regs *regs)
    + struct pt_regs *regs, struct pt_regs *iregs)
    {
    struct kvm_pmc *pmc = perf_event->overflow_handler_context;
    struct kvm_pmu *pmu = pmc_to_pmu(pmc);
    diff --git a/drivers/oprofile/nmi_timer_int.c b/drivers/oprofile/nmi_timer_int.c
    index f343bd96609a..110dfef21420 100644
    --- a/drivers/oprofile/nmi_timer_int.c
    +++ b/drivers/oprofile/nmi_timer_int.c
    @@ -28,7 +28,7 @@ static struct perf_event_attr nmi_timer_attr = {

    static void nmi_timer_callback(struct perf_event *event,
    struct perf_sample_data *data,
    - struct pt_regs *regs)
    + struct pt_regs *regs, struct pt_regs *iregs)
    {
    event->hw.interrupts = 0; /* don't throttle interrupts */
    oprofile_add_sample(regs, 0);
    diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
    index 53c500f0ca79..3a989c64c2c7 100644
    --- a/include/linux/perf_event.h
    +++ b/include/linux/perf_event.h
    @@ -506,7 +506,8 @@ struct perf_sample_data;

    typedef void (*perf_overflow_handler_t)(struct perf_event *,
    struct perf_sample_data *,
    - struct pt_regs *regs);
    + struct pt_regs *regs,
    + struct pt_regs *iregs);

    /*
    * Event capabilities. For event_caps and groups caps.
    @@ -966,21 +967,26 @@ extern void perf_output_sample(struct perf_output_handle *handle,
    extern void perf_prepare_sample(struct perf_event_header *header,
    struct perf_sample_data *data,
    struct perf_event *event,
    - struct pt_regs *regs);
    + struct pt_regs *regs,
    + struct pt_regs *iregs);

    extern int perf_event_overflow(struct perf_event *event,
    struct perf_sample_data *data,
    - struct pt_regs *regs);
    + struct pt_regs *regs,
    + struct pt_regs *iregs);

    extern void perf_event_output_forward(struct perf_event *event,
    struct perf_sample_data *data,
    - struct pt_regs *regs);
    + struct pt_regs *regs,
    + struct pt_regs *iregs);
    extern void perf_event_output_backward(struct perf_event *event,
    struct perf_sample_data *data,
    - struct pt_regs *regs);
    + struct pt_regs *regs,
    + struct pt_regs *iregs);
    extern void perf_event_output(struct perf_event *event,
    struct perf_sample_data *data,
    - struct pt_regs *regs);
    + struct pt_regs *regs,
    + struct pt_regs *iregs);

    static inline bool
    is_default_overflow_handler(struct perf_event *event)
    diff --git a/kernel/events/core.c b/kernel/events/core.c
    index 84530ab358c3..1b57602dc6d8 100644
    --- a/kernel/events/core.c
    +++ b/kernel/events/core.c
    @@ -6369,7 +6369,7 @@ perf_callchain(struct perf_event *event, struct pt_regs *regs)
    void perf_prepare_sample(struct perf_event_header *header,
    struct perf_sample_data *data,
    struct perf_event *event,
    - struct pt_regs *regs)
    + struct pt_regs *regs, struct pt_regs *iregs)
    {
    u64 sample_type = event->attr.sample_type;

    @@ -6474,7 +6474,7 @@ void perf_prepare_sample(struct perf_event_header *header,
    /* regs dump ABI info */
    int size = sizeof(u64);

    - perf_sample_regs_intr(&data->regs_intr, regs);
    + perf_sample_regs_intr(&data->regs_intr, iregs);

    if (data->regs_intr.regs) {
    u64 mask = event->attr.sample_regs_intr;
    @@ -6492,7 +6492,7 @@ void perf_prepare_sample(struct perf_event_header *header,
    static __always_inline void
    __perf_event_output(struct perf_event *event,
    struct perf_sample_data *data,
    - struct pt_regs *regs,
    + struct pt_regs *regs, struct pt_regs *iregs,
    int (*output_begin)(struct perf_output_handle *,
    struct perf_event *,
    unsigned int))
    @@ -6503,7 +6503,7 @@ __perf_event_output(struct perf_event *event,
    /* protect the callchain buffers */
    rcu_read_lock();

    - perf_prepare_sample(&header, data, event, regs);
    + perf_prepare_sample(&header, data, event, regs, iregs);

    if (output_begin(&handle, event, header.size))
    goto exit;
    @@ -6519,25 +6519,25 @@ __perf_event_output(struct perf_event *event,
    void
    perf_event_output_forward(struct perf_event *event,
    struct perf_sample_data *data,
    - struct pt_regs *regs)
    + struct pt_regs *regs, struct pt_regs *iregs)
    {
    - __perf_event_output(event, data, regs, perf_output_begin_forward);
    + __perf_event_output(event, data, regs, iregs, perf_output_begin_forward);
    }

    void
    perf_event_output_backward(struct perf_event *event,
    struct perf_sample_data *data,
    - struct pt_regs *regs)
    + struct pt_regs *regs, struct pt_regs *iregs)
    {
    - __perf_event_output(event, data, regs, perf_output_begin_backward);
    + __perf_event_output(event, data, regs, iregs, perf_output_begin_backward);
    }

    void
    perf_event_output(struct perf_event *event,
    struct perf_sample_data *data,
    - struct pt_regs *regs)
    + struct pt_regs *regs, struct pt_regs *iregs)
    {
    - __perf_event_output(event, data, regs, perf_output_begin);
    + __perf_event_output(event, data, regs, iregs, perf_output_begin);
    }

    /*
    @@ -7738,7 +7738,7 @@ int perf_event_account_interrupt(struct perf_event *event)

    static int __perf_event_overflow(struct perf_event *event,
    int throttle, struct perf_sample_data *data,
    - struct pt_regs *regs)
    + struct pt_regs *regs, struct pt_regs *iregs)
    {
    int events = atomic_read(&event->event_limit);
    int ret = 0;
    @@ -7765,7 +7765,7 @@ static int __perf_event_overflow(struct perf_event *event,
    perf_event_disable_inatomic(event);
    }

    - READ_ONCE(event->overflow_handler)(event, data, regs);
    + READ_ONCE(event->overflow_handler)(event, data, regs, iregs);

    if (*perf_event_fasync(event) && event->pending_kill) {
    event->pending_wakeup = 1;
    @@ -7777,9 +7777,9 @@ static int __perf_event_overflow(struct perf_event *event,

    int perf_event_overflow(struct perf_event *event,
    struct perf_sample_data *data,
    - struct pt_regs *regs)
    + struct pt_regs *regs, struct pt_regs *iregs)
    {
    - return __perf_event_overflow(event, 1, data, regs);
    + return __perf_event_overflow(event, 1, data, regs, iregs);
    }

    /*
    @@ -7842,7 +7842,7 @@ static void perf_swevent_overflow(struct perf_event *event, u64 overflow,

    for (; overflow; overflow--) {
    if (__perf_event_overflow(event, throttle,
    - data, regs)) {
    + data, regs, regs)) {
    /*
    * We inhibit the overflow from happening when
    * hwc->interrupts == MAX_INTERRUPTS.
    @@ -8550,7 +8550,7 @@ static void bpf_overflow_handler(struct perf_event *event,
    if (!ret)
    return;

    - event->orig_overflow_handler(event, data, regs);
    + event->orig_overflow_handler(event, data, regs, regs);
    }

    static int perf_event_set_bpf_handler(struct perf_event *event, u32 prog_fd)
    @@ -9152,7 +9152,7 @@ static enum hrtimer_restart perf_swevent_hrtimer(struct hrtimer *hrtimer)

    if (regs && !perf_exclude_event(event, regs)) {
    if (!(event->attr.exclude_idle && is_idle_task(current)))
    - if (__perf_event_overflow(event, 1, &data, regs))
    + if (__perf_event_overflow(event, 1, &data, regs, regs))
    ret = HRTIMER_NORESTART;
    }

    diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
    index 08fcfe440c63..6faf12fd6114 100644
    --- a/kernel/trace/bpf_trace.c
    +++ b/kernel/trace/bpf_trace.c
    @@ -392,7 +392,7 @@ __bpf_perf_event_output(struct pt_regs *regs, struct bpf_map *map,
    if (unlikely(event->oncpu != cpu))
    return -EOPNOTSUPP;

    - perf_event_output(event, sd, regs);
    + perf_event_output(event, sd, regs, regs);
    return 0;
    }

    diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
    index 71381168dede..5f4e18d003bb 100644
    --- a/kernel/watchdog_hld.c
    +++ b/kernel/watchdog_hld.c
    @@ -109,7 +109,7 @@ static struct perf_event_attr wd_hw_attr = {
    /* Callback function for perf event subsystem */
    static void watchdog_overflow_callback(struct perf_event *event,
    struct perf_sample_data *data,
    - struct pt_regs *regs)
    + struct pt_regs *regs, struct pt_regs *iregs)
    {
    /* Ensure the watchdog never gets throttled */
    event->hw.interrupts = 0;
    --
    2.19.1
    From 721bb20a8a7d1ff2f7b062f8d92f50c743883d35 Mon Sep 17 00:00:00 2001
    From: Milian Wolff <milian.wolff@kdab.com>
    Date: Wed, 14 Nov 2018 14:18:56 +0100
    Subject: [PATCH 2/2] [WIP] perf unwind: use iregs for unwinding

    TODO: only use it if available
    TODO: figure out when to skip the iregs->ip frame, and when
    to use it (e.g. when the functions for iregs->ip and uregs->ip
    differ?)

    Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
    Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Jiri Olsa <jolsa@redhat.com>
    ---
    tools/perf/util/unwind-libunwind-local.c | 10 +++++-----
    1 file changed, 5 insertions(+), 5 deletions(-)

    diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
    index 79f521a552cf..39f19673cc34 100644
    --- a/tools/perf/util/unwind-libunwind-local.c
    +++ b/tools/perf/util/unwind-libunwind-local.c
    @@ -492,12 +492,12 @@ static int access_mem(unw_addr_space_t __maybe_unused as,
    int ret;

    /* Don't support write, probably not needed. */
    - if (__write || !stack || !ui->sample->user_regs.regs) {
    + if (__write || !stack || !ui->sample->intr_regs.regs) {
    *valp = 0;
    return 0;
    }

    - ret = perf_reg_value(&start, &ui->sample->user_regs,
    + ret = perf_reg_value(&start, &ui->sample->intr_regs,
    LIBUNWIND__ARCH_REG_SP);
    if (ret)
    return ret;
    @@ -541,7 +541,7 @@ static int access_reg(unw_addr_space_t __maybe_unused as,
    return 0;
    }

    - if (!ui->sample->user_regs.regs) {
    + if (!ui->sample->intr_regs.regs) {
    *valp = 0;
    return 0;
    }
    @@ -550,7 +550,7 @@ static int access_reg(unw_addr_space_t __maybe_unused as,
    if (id < 0)
    return -EINVAL;

    - ret = perf_reg_value(&val, &ui->sample->user_regs, id);
    + ret = perf_reg_value(&val, &ui->sample->intr_regs, id);
    if (ret) {
    pr_err("unwind: can't read reg %d\n", regnum);
    return ret;
    @@ -716,7 +716,7 @@ static int _unwind__get_entries(unwind_entry_cb_t cb, void *arg,
    .machine = thread->mg->machine,
    };

    - if (!data->user_regs.regs)
    + if (!data->intr_regs.regs)
    return -EINVAL;

    if (max_stack <= 0)
    --
    2.19.1