Subject: Re: [PATCH RFC 1/2] qspinlock: Introducing a 4-byte queue spinlock implementation
On Fri, Aug 02, 2013 at 01:53:22AM +0530, Raghavendra K T wrote:

You need to learn to trim your replies.. I already stopped reading that
paravirt thread because of it. Soon I'll introduce you to my /dev/null
mail reader.

> On 08/01/2013 08:07 AM, Waiman Long wrote:
> >+static __always_inline void queue_spin_lock(struct qspinlock *lock)
> >+{
> >+        if (likely(queue_spin_trylock(lock)))
> >+                return;
> >+        queue_spin_lock_slowpath(lock);
> >+}
>
> Quickly falling into the slowpath may hurt performance in some cases, no?
>
> Instead, I tried something like this:
>
> #define SPIN_THRESHOLD 64
>
> static __always_inline void queue_spin_lock(struct qspinlock *lock)
> {
> unsigned count = SPIN_THRESHOLD;
>
> do {
>         if (likely(queue_spin_trylock(lock)))
>                 return;
>         cpu_relax();
> } while (count--);
> queue_spin_lock_slowpath(lock);
> }
>
> I could see some gains in the overcommit case, but it hurt undercommit
> in some workloads :(.

This would break the FIFO nature of the lock.
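
A minimal user-space sketch of that fairness point, assuming nothing
about the actual qspinlock internals: the toy_* names below are made up
and C11 atomics stand in for the kernel primitives. The point is only
that a queued lock is FIFO from the moment a CPU joins the queue, so an
unqueued trylock spin in front of it lets a later arrival win the raw
cmpxchg race against a CPU that has already been spinning for longer.

/*
 * Hypothetical user-space sketch, not kernel code: toy_* names are made
 * up, C11 atomics stand in for the kernel's atomic primitives.
 */
#include <stdatomic.h>
#include <stdbool.h>

struct toy_lock {
        atomic_uint next;       /* next ticket to hand out        */
        atomic_uint serving;    /* ticket that currently holds it */
};

/* Succeeds only if the lock is free and nobody is queued. */
static bool toy_trylock(struct toy_lock *l)
{
        unsigned int s = atomic_load(&l->serving);
        unsigned int expected = s;

        if (atomic_load(&l->next) != s)
                return false;
        /* Raw race on the ticket dispenser; arrival order plays no part. */
        return atomic_compare_exchange_strong(&l->next, &expected, s + 1);
}

/* FIFO slowpath: take a ticket and wait for your turn. */
static void toy_lock_slowpath(struct toy_lock *l)
{
        unsigned int me = atomic_fetch_add(&l->next, 1);

        while (atomic_load(&l->serving) != me)
                ;       /* cpu_relax() in the kernel */
}

static void toy_unlock(struct toy_lock *l)
{
        atomic_fetch_add(&l->serving, 1);
}

/*
 * The proposed SPIN_THRESHOLD loop: CPUs sitting here hold no ticket,
 * so a CPU that arrived later can win toy_trylock() ahead of one that
 * has been spinning in this loop for longer.
 */
static void toy_lock_with_spin(struct toy_lock *l)
{
        unsigned int count = 64;

        do {
                if (toy_trylock(l))
                        return;
        } while (count--);

        toy_lock_slowpath(l);
}

Once a CPU has fallen into toy_lock_slowpath() the ticket order takes
over and arrival order is honoured again; the unfairness lives entirely
in the pre-queue spin window that the SPIN_THRESHOLD loop adds in front
of it.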


