 
Subject: Re: [PATCH v3 1/2] qspinlock: Introducing a 4-byte queue spinlock implementation
Date: 2014-01-31
    On Tue, Jan 28, 2014 at 01:19:10PM -0500, Waiman Long wrote:
    > For single-thread performance (no contention), a 256K lock/unlock
    > loop was run on a 2.4GHz Westmere x86-64 CPU. The following table
    > shows the average time (in ns) for a single lock/unlock sequence
    > (including the looping and timing overhead):
    >
    > Lock Type                  Time (ns)
    > ---------                  ---------
    > Ticket spinlock               14.1
    > Queue spinlock (Normal)        8.8*
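
    (For reference, this is the kind of single-thread timing loop being
    described above -- a user-space sketch of the methodology only, with a
    pthread spinlock standing in for the kernel lock under test; it is not
    Waiman's actual harness:)

        #include <pthread.h>
        #include <stdio.h>
        #include <time.h>

        #define ITERS   (256 * 1024)

        static pthread_spinlock_t lock;

        static unsigned long long now_ns(void)
        {
                struct timespec ts;

                clock_gettime(CLOCK_MONOTONIC, &ts);
                return ts.tv_sec * 1000000000ULL + ts.tv_nsec;
        }

        int main(void)
        {
                unsigned long long t0, t1;
                int i;

                pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);

                t0 = now_ns();
                for (i = 0; i < ITERS; i++) {
                        pthread_spin_lock(&lock);       /* lock under test */
                        pthread_spin_unlock(&lock);
                }
                t1 = now_ns();

                /* average cost of one lock/unlock pair, incl. loop overhead */
                printf("%.1f ns per lock/unlock\n", (double)(t1 - t0) / ITERS);
                return 0;
        }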

    What CONFIG_NR_CPUS?

    Because for CONFIG_NR_CPUS < 128 (or 256 if you've got !PARAVIRT), the
    fast-path code should be:

    ticket:

    mov $0x100,%eax          # increment: +1 in the tail byte, 0 in the head byte
    lock xadd %ax,(%rbx)     # take a ticket: returns old head (%al) / tail (%ah)
    cmp %al,%ah              # old head == our ticket (the old tail) -> got it
    jne ...                  # otherwise it's held, off to the spin loop

    although my GCC is being silly and writes:

    mov $0x100,%eax
    lock xadd %ax,(%rbx)
    movzbl %ah,%edx
    cmp %al,%dl
    jne ...

    Which seems rather like a waste of a perfectly good cycle.
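
    (FWIW, a rough C sketch of what that 8-bit ticket fast path looks like --
    simplified, no paravirt hooks, names made up here, and GCC builtins
    standing in for the kernel's atomics:)

        /* head/tail packed into one 16-bit word: on little-endian x86 the
         * head is the low byte (%al) and the tail the high byte (%ah), so a
         * single locked xadd of 0x100 both takes a ticket and returns the
         * current head for the "is it my turn" check. */
        struct ticket_lock {
                union {
                        unsigned short head_tail;
                        struct { unsigned char head, tail; };
                };
        };

        static inline void ticket_lock(struct ticket_lock *lock)
        {
                union {
                        unsigned short val;
                        struct { unsigned char head, tail; };
                } old;

                /* lock xadd: fetch old head/tail, tail += 1 */
                old.val = __atomic_fetch_add(&lock->head_tail, 0x100,
                                             __ATOMIC_ACQUIRE);

                while (old.head != old.tail) {          /* not our turn yet */
                        __builtin_ia32_pause();         /* cpu_relax()      */
                        old.head = __atomic_load_n(&lock->head,
                                                   __ATOMIC_ACQUIRE);
                }
        }

        static inline void ticket_unlock(struct ticket_lock *lock)
        {
                /* hand the lock to the next ticket: head += 1 */
                __atomic_fetch_add(&lock->head, 1, __ATOMIC_RELEASE);
        }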

    With a bigger NR_CPUS you do indeed need more ops:

    mov $0x10000,%edx
    lock xadd %edx,(%rbx)
    mov %edx,%ecx
    shr $0x10,%ecx
    cmp %dx,%cx
    jne ...
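
    (Same sketch with 16-bit tickets, i.e. roughly what the wider-NR_CPUS
    sequence above compiles from -- again just an illustration:)

        /* head/tail now occupy a 32-bit word, the xadd adds 0x10000, and
         * extracting the tail needs the extra shift seen above */
        struct ticket_lock_big {
                union {
                        unsigned int head_tail;
                        struct { unsigned short head, tail; };
                };
        };

        static inline void ticket_lock_big(struct ticket_lock_big *lock)
        {
                unsigned int old = __atomic_fetch_add(&lock->head_tail,
                                                      0x10000, __ATOMIC_ACQUIRE);
                unsigned short ticket = old >> 16;      /* shr $0x10 above */

                /* cmp %dx,%cx: old head vs our ticket */
                while ((unsigned short)old != ticket) {
                        __builtin_ia32_pause();
                        old = __atomic_load_n(&lock->head, __ATOMIC_ACQUIRE);
                }
        }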


    Whereas for the straight cmpxchg() you'd get something relatively simple
    like:

    mov %edx,%eax
    lock cmpxchg %ecx,(%rbx)
    cmp %edx,%eax
    jne ...
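
    (That is, a single compare-and-swap of the whole lock word from 0
    (unlocked) to 1 (locked), falling back to a queueing slow path on
    failure; a sketch, with names illustrative rather than the actual
    qspinlock code:)

        void queue_lock_slowpath(unsigned int *lock);   /* queueing path, not shown */

        static inline int queue_trylock(unsigned int *lock)
        {
                unsigned int expected = 0;

                /* mov %edx,%eax; lock cmpxchg %ecx,(%rbx); cmp; jne ... */
                return __atomic_compare_exchange_n(lock, &expected, 1,
                                                   0, __ATOMIC_ACQUIRE,
                                                   __ATOMIC_RELAXED);
        }

        static inline void queue_lock(unsigned int *lock)
        {
                if (queue_trylock(lock))        /* uncontended fast path */
                        return;
                queue_lock_slowpath(lock);
        }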



    Anyway, as soon as you get some (light) contention you're going to tank
    because you have to pull in extra cachelines, which is sad.


    I suppose we could borrow from the ticket code more and optimize the
    uncontended path, but that'll make the contended path more expensive
    again, although probably not as bad as hitting a new cacheline.

