Subject: Re: [PATCH v2 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock
On Wed, Apr 03, 2019 at 12:33:20PM -0400, Waiman Long wrote:
> static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> {
>         if (static_branch_unlikely(&use_numa_spinlock))
>                 numa_queued_spin_lock_slowpath(lock, val);
>         else
>                 native_queued_spin_lock_slowpath(lock, val);
> }

That's horrible for the exact reason you state.

> Alternatively, we can also call numa_queued_spin_lock_slowpath() in
> native_queued_spin_lock_slowpath() if we don't want to increase the code
> size of spinlock call sites.

Yeah, I still don't much like that though; we're littering the fast path
of that slow path with all sorts of crap.
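
For concreteness, a minimal sketch of what that alternative would look
like, reusing the names from the quoted snippet (use_numa_spinlock and
numa_queued_spin_lock_slowpath() are from the patch under discussion,
not mainline); the static-branch test just moves from the call site to
the top of the native slow path:

void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
{
	/*
	 * The dispatch now lives at the head of the slow path itself,
	 * so every native slow-path entry executes this check. The
	 * static key patches it to a no-op branch when CNA is off,
	 * but it is still extra text on the common path.
	 */
	if (static_branch_unlikely(&use_numa_spinlock)) {
		numa_queued_spin_lock_slowpath(lock, val);
		return;
	}

	/* ... existing MCS queueing code follows ... */
}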
