Date: 2008-09-05
Subject: Re: [rfc][patch] queueing spinlocks?
[resend, as the first had a problem going out]

Hi Nick,
Cool stuff...see inline

Nick Piggin wrote:
> I've implemented a sort of locally-spinning, queueing MCS lock that uses
> per-cpu nodes that can be shared by multiple locks. I guess it is preferable
> to remove global locks, but some don't seem to be going anywhere soon.
>
> The only issue is that only one set of nodes can be actively used for a lock
> at once, so if we want to nest these locks, we have to use different sets
> for each one. This shouldn't be much of a problem because we don't have too
> many "big" locks, and even fewer that are nested in one another.
>
> With this modification to MCS locks, each lock is pretty small in size, so it
> could even be used for some per-object locks if we really wanted.
>
> I've converted dcache lock as well... it shows improved results on a 64-way
> Altix. Unfortunately this adds an extra atomic to the unlock path. I didn't
> look too hard at array-based queue locks; there might be a type of those
> that would work better.
>
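[Aside: for anyone not already familiar with MCS locks, below is a minimal
userspace sketch of the textbook algorithm, in which each waiter supplies its
own queue node and spins only on that node. It is not Nick's shared-per-cpu-node
variant, and it uses GCC __atomic builtins rather than the kernel's own
primitives; it is only meant to show the queueing idea.]

#include <stddef.h>

/*
 * Textbook MCS lock: the lock itself is just a tail pointer, each waiter
 * brings its own node and spins on its own "locked" flag, and unlock hands
 * the lock directly to the next waiter in the queue.
 */
struct mcs_node {
	struct mcs_node *next;
	int locked;			/* set by our predecessor at handoff */
};

struct mcs_lock {
	struct mcs_node *tail;		/* NULL when nobody holds the lock */
};

static void mcs_acquire(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *prev;

	node->next = NULL;
	node->locked = 0;

	/* Append ourselves to the queue by swapping the tail pointer. */
	prev = __atomic_exchange_n(&lock->tail, node, __ATOMIC_ACQ_REL);
	if (!prev)
		return;			/* queue was empty: lock acquired */

	/* Link in behind the previous waiter, then spin on our own node. */
	__atomic_store_n(&prev->next, node, __ATOMIC_RELEASE);
	while (!__atomic_load_n(&node->locked, __ATOMIC_ACQUIRE))
		;			/* cpu_relax() in kernel code */
}

static void mcs_release(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *next = __atomic_load_n(&node->next, __ATOMIC_ACQUIRE);

	if (!next) {
		/* No successor visible: try to swing the tail back to NULL. */
		struct mcs_node *expected = node;

		if (__atomic_compare_exchange_n(&lock->tail, &expected, NULL, 0,
						__ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE))
			return;

		/* A successor is between the exchange and the link; wait. */
		while (!(next = __atomic_load_n(&node->next, __ATOMIC_ACQUIRE)))
			;
	}

	/* Hand off: the next waiter stops spinning and now owns the lock. */
	__atomic_store_n(&next->locked, 1, __ATOMIC_RELEASE);
}
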
> Index: linux-2.6/include/linux/mcslock.h
> ===================================================================
> --- /dev/null
> +++ linux-2.6/include/linux/mcslock.h
> @@ -0,0 +1,76 @@
> +/*
> + * "Shared-node" MCS lock.
> + * Nick Piggin <npiggin@suse.de>
> + */
> +#ifndef _LINUX_MCSLOCK_H
> +#define _LINUX_MCSLOCK_H
> +
> +#include <linux/types.h>
> +#include <linux/kernel.h>
> +#include <linux/irqflags.h>
> +#include <asm/atomic.h>
> +#include <asm/system.h>
> +#include <asm/processor.h>
> +
> +#ifndef CONFIG_SMP
> +typedef struct {
> +} mcslock_t;
> +
> +static inline void mcs_lock_init(mcslock_t *lock)
> +{
> +}
> +
> +static inline int mcs_is_locked(mcslock_t *lock)
> +{
> + return 0;
> +}
> +
> +static inline void mcs_unlock_wait(mcslock_t *lock)
> +{
> +}
> +
> +static inline void mcs_lock(mcslock_t *lock, int nest)
> +{
> +}
> +static inline int mcs_trylock(mcslock_t *lock, int nest)
> +{
> + return 1;
> +}
> +static inline void mcs_unlock(mcslock_t *lock, int nest)
> +{
> +}
> +
> +#else /* CONFIG_SMP */
> +
> +typedef struct {
> + atomic_t cpu;
> +} mcslock_t;
> +
> +#define MCS_CPU_NONE 0x7fffffff
> +
> +#define DEFINE_MCS_LOCK(name) mcslock_t name = { .cpu = ATOMIC_INIT(MCS_CPU_NONE) }
> +static inline void mcs_lock_init(mcslock_t *lock)
> +{
> + atomic_set(&lock->cpu, MCS_CPU_NONE); /* unlocked */
> +}
> +
> +static inline int mcs_is_locked(mcslock_t *lock)
> +{
> + return atomic_read(&lock->cpu) != MCS_CPU_NONE;
> +}
> +
> +static inline void mcs_unlock_wait(mcslock_t *lock)
> +{
> + while (mcs_is_locked(lock))
> + cpu_relax();
> +}
> +
> +extern void mcs_lock(mcslock_t *lock, int nest);
> +extern int mcs_trylock(mcslock_t *lock, int nest);
> +extern void mcs_unlock(mcslock_t *lock, int nest);
> +
> +#endif /*!CONFIG_SMP*/
> +
> +extern int atomic_dec_and_mcslock(atomic_t *atomic, mcslock_t *lock, int nest);
>
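
For reference, a hypothetical use of the interface above (assuming the SMP
definitions) might look like the following; the nest indices 0 and 1 are made
up for the example, the only real requirement being that locks which can nest
inside one another use different node sets:

static DEFINE_MCS_LOCK(outer_lock);
static DEFINE_MCS_LOCK(inner_lock);

static void example(void)
{
	mcs_lock(&outer_lock, 0);	/* outer lock uses node set 0 */
	mcs_lock(&inner_lock, 1);	/* nested lock must use a different set */

	/* ... critical section ... */

	mcs_unlock(&inner_lock, 1);
	mcs_unlock(&outer_lock, 0);
}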

I would prefer to see this done as a polymorphic atomic_dec_and_lock()
call with something like Ingo's "PICK_OP" method (currently used in -rt)
rather than expanding the atomic_* namespace. I haven't looked into it
closely enough to be sure it's plausible, but from 30,000 feet I don't see
any reason why it wouldn't be. It's not a huge deal either way, just
something to consider.
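
To make that concrete, a self-contained toy of this kind of compile-time type
dispatch might look like the sketch below. It is only loosely in the spirit of
-rt's PICK_OP, not the actual macro, and every type and function name in it is
a made-up stand-in:

#include <stdio.h>

typedef struct { int dummy; } spinlock_t;
typedef struct { int cpu; } mcslock_t;

static int dec_and_spinlock(int *counter, spinlock_t *lock)
{
	printf("spinlock path\n");
	return --(*counter) == 0;
}

static int dec_and_mcslock(int *counter, mcslock_t *lock)
{
	printf("mcslock path\n");
	return --(*counter) == 0;
}

/* Pick the implementation from the static type of the lock pointer. */
#define dec_and_lock(counter, lock)					   \
	__builtin_choose_expr(						   \
		__builtin_types_compatible_p(typeof(lock), mcslock_t *),  \
		dec_and_mcslock((counter), (mcslock_t *)(void *)(lock)),  \
		dec_and_spinlock((counter), (spinlock_t *)(void *)(lock)))

int main(void)
{
	int count;
	spinlock_t s = { 0 };
	mcslock_t m = { 0 };

	count = 1;
	dec_and_lock(&count, &s);	/* resolves to dec_and_spinlock() */
	count = 1;
	dec_and_lock(&count, &m);	/* resolves to dec_and_mcslock() */
	return 0;
}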

-Greg


