From: Manfred Spraul <>
Subject: Re: [lkp] [ipc/sem.c] 5864a2fd30: aim9.shared_memory.ops_per_sec -13.0%
Date: Wed, 19 Oct 2016 06:38:14 +0200
Hi,
As discussed before, the root cause of the performance regression is the smp_mb() that was added to the fast path.
I see two options:
1) Switch to full spin_lock()/spin_unlock() for the rare codepath; then the fast path doesn't need the smp_mb() anymore.
2) Confirm that no arch needs the smp_mb(), then remove it.
   - powerpc is ok after commit 6262db7c088b ("powerpc/spinlock: Fix spin_unlock_wait()")
   - arm64 is ok after commit d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")
   - x86 is ok after commit 2c6100227116 ("locking/qspinlock: Fix spin_unlock_wait() some more")
   - for the remaining SMP architectures, I don't have a status.
I would prefer approach 1: the memory ordering provided by spin_lock()/spin_unlock() is clear.
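To make the idea concrete, here is a minimal user-space sketch of what approach 1 relies on. All names are made up, pthread spinlocks stand in for kernel spinlocks, and this is not the ipc/sem.c code: instead of spinning until the other side's lock is free and then issuing an explicit barrier, the rare path briefly acquires and releases that lock, and the acquire/release semantics of the lock operations provide the ordering.

/*
 * Minimal user-space sketch of the idea behind approach 1.
 * Names are illustrative; this is not the ipc/sem.c code.
 *
 * The kernel's spin_unlock_wait()+smp_mb() combination has no pthread
 * equivalent, so only the replacement is shown: briefly take and drop
 * the other side's lock, and let the acquire/release ordering of the
 * lock operations do the work that the explicit smp_mb() did before.
 */
#include <pthread.h>

static pthread_spinlock_t other_lock;	/* stands in for the "other side's" lock */

static void sync_with_other_side(void)
{
	/*
	 * Acquire + immediate release: once we own the lock, any
	 * previous holder has finished its critical section, and our
	 * later accesses are ordered after it by the lock operations
	 * themselves - no separate memory barrier needed.
	 */
	pthread_spin_lock(&other_lock);
	pthread_spin_unlock(&other_lock);
}

int main(void)
{
	pthread_spin_init(&other_lock, PTHREAD_PROCESS_PRIVATE);
	sync_with_other_side();
	pthread_spin_destroy(&other_lock);
	return 0;
}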
Thus, attached are patches for approach 1:
- Patch 1 replaces spin_unlock_wait() with spin_lock()/spin_unlock() and removes all memory barriers that are then unnecessary.
- Patch 2 adds the hysteresis code: this makes the rare codepath extremely rare. It also corrects some wrong comments, e.g. regarding switching from the global lock to per-semaphore locks (we "must" switch, not we "can" switch as written right now).
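For illustration only, a toy user-space sketch of the hysteresis idea; the names and the constant are made up, it is not the patch itself: after a complex operation forces the global lock, the code stays in global-lock mode for the next few simple operations instead of switching back immediately, so the mode switch stays rare.

/*
 * Toy sketch of the hysteresis idea; names and the constant are made
 * up, this is not the ipc/sem.c patch.  After a complex operation
 * forces global-lock mode, a number of simple operations must pass
 * before we drop back to per-semaphore locks, so the mode switch
 * (the rare, expensive codepath) only happens when complex operations
 * have really stopped arriving.
 */
#include <stdbool.h>
#include <stdio.h>

#define HYSTERESIS	10	/* simple ops to wait before leaving global mode */

static int use_global_mode;	/* 0: per-sem locks, >0: global lock mode */

static void complex_op(void)
{
	use_global_mode = HYSTERESIS;	/* (re)arm: stay in global mode for a while */
}

/* returns true when the fast per-semaphore path may be used */
static bool simple_op(void)
{
	if (use_global_mode) {
		use_global_mode--;	/* decay towards per-sem mode */
		return false;		/* still under the global lock */
	}
	return true;			/* fast path, per-semaphore lock */
}

int main(void)
{
	complex_op();
	for (int i = 0; i < 12; i++)
		printf("simple op %2d: %s path\n", i,
		       simple_op() ? "per-sem" : "global");
	return 0;
}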
The patches passed stress-testing.
What do you think? My initial idea was to aim for 4.10, so that we have more time to decide.
--
    Manfred