Subject: Re: [tip:perfcounters/core] perf_counter: x86: Fix call-chain support to use NMI-safe methods


    On Mon, 15 Jun 2009, Ingo Molnar wrote:
    >
    > The gist of it is the replacement of iret with this open-coded
    > sequence:
    >
    > +#define NATIVE_INTERRUPT_RETURN_NMI_SAFE pushq %rax; \
    > + movq %rsp, %rax; \
    > + movq 24+8(%rax), %rsp; \
    > + pushq 0+8(%rax); \
    > + pushq 16+8(%rax); \
    > + movq (%rax), %rax; \
    > + popfq; \
    > + ret
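
    For reference, here is a commented reading of the same sequence, assuming
    the standard 64-bit exception frame that iret would have consumed (RIP,
    CS, RFLAGS, RSP, SS, from low to high addresses):

        pushq %rax              # save rax; the iret frame is now at 8(%rsp)
        movq %rsp, %rax         # rax = pointer to the saved rax / frame base
        movq 24+8(%rax), %rsp   # switch to the interrupted context's stack (frame RSP)
        pushq 0+8(%rax)         # copy the return RIP onto that stack
        pushq 16+8(%rax)        # copy the saved RFLAGS onto that stack
        movq (%rax), %rax       # restore the original rax
        popfq                   # restore flags without iret's NMI-unmasking side effect
        ret                     # pop the RIP pushed above and jump to it

    CS and SS are never reloaded, so this can only be used when returning to
    the same kernel segments.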

    That's an odd way of writing it.

    Don't we have a per-cpu segment here? I'd much rather just see it do
    something like this (_before_ restoring the regular registers)

    movq RIP(%rsp),%rax
    movq RSP(%rsp),%rdx
    movq %rax,%gs:saved_eip
    movq %rdx,%gs:saved_esp

    # restore regular regs
    RESTORE_ALL

    # skip rip/cs to get at rflags
    addq $16,%rsp
    popfq

    # restore rsp/rip
    movq %gs:saved_esp,%rsp
    jmpq *%gs:saved_eip

    but I haven't thought deeply about it. Maybe there's something wrong with
    the above.
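
    The %gs-relative slots above would presumably just be a pair of per-cpu
    variables; a minimal sketch, with saved_esp/saved_eip purely as
    illustrative names rather than anything from an actual patch:

        #include <linux/percpu.h>

        /* Hypothetical per-cpu scratch slots backing the %gs:saved_esp and
         * %gs:saved_eip references in the sketch above. */
        DEFINE_PER_CPU(unsigned long, saved_esp);
        DEFINE_PER_CPU(unsigned long, saved_eip);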

    > If it's faster, this becomes a legit (albeit complex)
    > micro-optimization in a _very_ hot codepath.

    I don't think it's all that hot. It's not like it's the return to user
    mode.

    Linus

