    Date: 2008-11-17
    From: Ingo Molnar <mingo@elte.hu>
    Subject: system_call() - Re: [Bug #11308] tbench regression on each kernel release from 2.6.22 -> 2.6.28

    * Ingo Molnar <mingo@elte.hu> wrote:

    > 100.000000 total
    > ................
    > 1.508888 system_call

    that's an easy one:

    ffffffff8020be00: 97321 <system_call>:
    ffffffff8020be00: 97321 0f 01 f8 swapgs
    ffffffff8020be03: 53089 66 66 66 90 xchg %ax,%ax
    ffffffff8020be07: 1524 66 66 90 xchg %ax,%ax
    ffffffff8020be0a: 0 66 66 90 xchg %ax,%ax
    ffffffff8020be0d: 0 66 66 90 xchg %ax,%ax

    ffffffff8020be10: 1511 <system_call_after_swapgs>:
    ffffffff8020be10: 1511 65 48 89 24 25 18 00 mov %rsp,%gs:0x18
    ffffffff8020be17: 0 00 00
    ffffffff8020be19: 0 65 48 8b 24 25 10 00 mov %gs:0x10,%rsp
    ffffffff8020be20: 0 00 00
    ffffffff8020be22: 1490 fb sti

    Those are syscall entry instruction costs - unavoidable security
    checks, etc. - i.e. hardware costs.

    But looking at this profile made me notice this detail:

    ENTRY(system_call_after_swapgs)

    Combined with this alignment rule we have in
    arch/x86/include/asm/linkage.h on 64-bit:

    #ifdef CONFIG_X86_64
    #define __ALIGN .p2align 4,,15
    #define __ALIGN_STR ".p2align 4,,15"
    #endif
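
    (For reference - this is from memory of the 2.6.28-era headers, so
    take it as a sketch: the padding comes from the ENTRY() macro in
    include/linux/linkage.h, which expands via ALIGN/__ALIGN:)

    #define ALIGN	__ALIGN
    #define ENTRY(name) \
    	.globl name; \
    	ALIGN; \
    	name:

    So ENTRY(system_call_after_swapgs) emits a ".p2align 4,,15" directly
    in front of the label, and GAS fills the gap with the multi-byte NOPs
    (the 66 66 90 xchg %ax,%ax sequences) visible in the profile above.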

    While it only inserts NOP sequences, that is still +13 bytes of
    excessive, stupid alignment padding straight in our syscall entry path.

    system_call_after_swapgs is an utter slowpath in any case. The interim
    fix is below - although it needs more thought and should probably be
    done via an ENTRY_UNALIGNED() method as well, for slowpath targets.
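
    (ENTRY_UNALIGNED() does not exist yet - the idea would be something
    like this hypothetical variant, next to ENTRY() in
    include/linux/linkage.h:)

    /* hypothetical: an ENTRY() variant without the alignment padding */
    #define ENTRY_UNALIGNED(name) \
    	.globl name; \
    	name: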

    With that we get this much nicer entry sequence:

    ffffffff8020be00: 544323 <system_call>:
    ffffffff8020be00: 544323 0f 01 f8 swapgs

    ffffffff8020be03: 197954 <system_call_after_swapgs>:
    ffffffff8020be03: 197954 65 48 89 24 25 18 00 mov %rsp,%gs:0x18
    ffffffff8020be0a: 0 00 00
    ffffffff8020be0c: 6578 65 48 8b 24 25 10 00 mov %gs:0x10,%rsp
    ffffffff8020be13: 0 00 00
    ffffffff8020be15: 0 fb sti
    ffffffff8020be16: 0 48 83 ec 50 sub $0x50,%rsp

    And we should probably weaken the generic code alignment rules on x86
    as well. I'll do some measurements of that.
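
    (For context: ".p2align 4,,15" asks for 16-byte alignment and allows
    up to 15 bytes of padding, i.e. it always pads. "Weakening" could mean
    capping that padding - purely an illustration, not a tested patch:)

    #ifdef CONFIG_X86_64
    /* example only: still prefer 16-byte alignment, but give up if it
       would cost more than 7 bytes of NOPs */
    #define __ALIGN		.p2align 4,,7
    #define __ALIGN_STR	".p2align 4,,7"
    #endif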

    Ingo

    Index: linux/arch/x86/kernel/entry_64.S
    ===================================================================
    --- linux.orig/arch/x86/kernel/entry_64.S
    +++ linux/arch/x86/kernel/entry_64.S
    @@ -315,7 +315,8 @@ ENTRY(system_call)
     	 * after the swapgs, so that it can do the swapgs
     	 * for the guest and jump here on syscall.
     	 */
    -ENTRY(system_call_after_swapgs)
    +.globl system_call_after_swapgs
    +system_call_after_swapgs:
     
     	movq	%rsp,%gs:pda_oldrsp
     	movq	%gs:pda_kernelstack,%rsp
