From: Uros Bizjak <>
Date: Thu, 11 Apr 2024 21:08:14 +0200
Subject: Re: [PATCH 2/2] locking/pvqspinlock: Use try_cmpxchg() in qspinlock_paravirt.h
On Thu, Apr 11, 2024 at 3:35 PM Uros Bizjak <ubizjak@gmail.com> wrote:
>
> On Thu, Apr 11, 2024 at 3:24 PM Ingo Molnar <mingo@kernel.org> wrote:
> >
> > * Uros Bizjak <ubizjak@gmail.com> wrote:
> >
> > > -	locked = cmpxchg_release(&lock->locked, _Q_LOCKED_VAL, 0);
> > > -	if (likely(locked == _Q_LOCKED_VAL))
> > > +	if (try_cmpxchg_release(&lock->locked, &locked, 0);
> > >		return;
> >                                                         ^------------ ???
> >
> > This doesn't appear to be a particularly well-tested patch. ;-)
>
> Ouch, embarrassing... oh it is a generic function, conditionally compiled with
>
> #ifndef __pv_queued_spin_lock
> #endif
>
> and x86 defines its own function ...
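For the record, the corrected hunk presumably ends up along these lines (a sketch reconstructed from the diff above, not quoted from the resent patch): the stray ';' becomes a closing parenthesis, and 'locked' is pre-seeded with _Q_LOCKED_VAL so that try_cmpxchg_release() can both test the fast path and hand the observed value to the slowpath:

void __pv_queued_spin_unlock(struct qspinlock *lock)
{
	u8 locked = _Q_LOCKED_VAL;

	/*
	 * try_cmpxchg_release() returns true on success; on failure it
	 * writes the value it actually observed back into 'locked'.
	 */
	if (try_cmpxchg_release(&lock->locked, &locked, 0))
		return;

	__pv_queued_spin_unlock_slowpath(lock, locked);
}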
Looking at the assembly of the fixed function, it looks like the improved generic function is better than the x86_64 special asm version:
This is the new generic function:
0000000000000750 <__pv_queued_spin_unlock>:
 750:	f3 0f 1e fa          	endbr64
 754:	b8 01 00 00 00       	mov    $0x1,%eax
 759:	31 d2                	xor    %edx,%edx
 75b:	f0 0f b0 17          	lock cmpxchg %dl,(%rdi)
 75f:	75 05                	jne    766 <__pv_queued_spin_unlock+0x16>
 761:	e9 00 00 00 00       	jmp    766 <__pv_queued_spin_unlock+0x16>
 			762: R_X86_64_PLT32	__x86_return_thunk-0x4
 766:	0f b6 f0             	movzbl %al,%esi
 769:	e9 02 ff ff ff       	jmp    670 <__pv_queued_spin_unlock_slowpath>
and the x86_64 asm version:
0000000000000050 <__raw_callee_save___pv_queued_spin_unlock>:
 50:	f3 0f 1e fa          	endbr64
 54:	52                   	push   %rdx
 55:	b8 01 00 00 00       	mov    $0x1,%eax
 5a:	31 d2                	xor    %edx,%edx
 5c:	f0 0f b0 17          	lock cmpxchg %dl,(%rdi)
 60:	3c 01                	cmp    $0x1,%al
 62:	75 06                	jne    6a <.slowpath>
 64:	5a                   	pop    %rdx
 65:	e9 00 00 00 00       	jmp    6a <.slowpath>
I didn't investigate the slowpath, but the generic fast path is certainly better than the x86_64 special asm: its jne consumes the ZF that lock cmpxchg already set, so there is no separate cmp $0x1,%al, and no push/pop of %rdx either.
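To illustrate where the shorter sequence comes from, here is a minimal user-space sketch (the helper name try_cmpxchg_u8 is invented for this example, and this is not the kernel's exact implementation): an asm flag output ("=@ccz", which is what the kernel's CC_SET(z)/CC_OUT(z) macros expand to) lets the compiler branch directly on the ZF produced by lock cmpxchg, whereas a value-returning cmpxchg() forces it to compare the returned old value afterwards.

#include <stdbool.h>

static inline bool try_cmpxchg_u8(unsigned char *ptr,
				  unsigned char *oldp, unsigned char new)
{
	unsigned char old = *oldp;
	bool success;

	/* ZF from "lock cmpxchg" is captured via the flag-output constraint */
	asm volatile("lock cmpxchgb %[new], %[ptr]"
		     : "=@ccz" (success), [ptr] "+m" (*ptr), "+a" (old)
		     : [new] "q" (new)
		     : "memory");
	if (!success)
		*oldp = old;	/* report the value actually seen, like the kernel API */
	return success;
}

With GCC's flag-output constraints (available since GCC 6), the success test compiles to a bare jne right after the lock cmpxchg, matching the generic fast path shown above.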
Uros.