Subject: Re: [PATCH V2 0/6][RFC] futex: FUTEX_LOCK with optional adaptive spinning
On 04/05/2010 11:23 PM, Darren Hart wrote:
>
> NOT FOR INCLUSION
>
> The following patch series implements a new experimental kernel-side futex mutex
> via new FUTEX_LOCK and FUTEX_LOCK_ADAPTIVE futex op codes. The adaptive spin
> follows the kernel mutex model of allowing one spinner until the lock is
> released or the owner is descheduled. The patch currently allows the user to
> specify if they want no spinning, a single adaptive spinner, or multiple
> spinners (aggressive adaptive spinning, or aas... which I have mistyped as "ass"
> enough times to realize a better term is indeed required :-).
>

An interesting (but perhaps difficult to achieve) optimization would be
to spin in userspace.
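
Roughly along these lines, with the kernel entered only once the spin
gives up. A minimal sketch against the existing FUTEX_WAIT/FUTEX_WAKE
ops; the spin bound, the 0/1/2 lock-state encoding, and the x86 pause
hint are my own illustrative choices, not anything from the patches:

#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

#define SPIN_LIMIT 100	/* arbitrary bound before sleeping in the kernel */

/* Lock states: 0 = unlocked, 1 = locked, 2 = locked, waiters present. */

static long sys_futex(atomic_int *uaddr, int op, int val)
{
	return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

static void lock(atomic_int *f)
{
	int i, c = 0;

	/* Optimistic userspace spin: hope the owner releases soon. */
	for (i = 0; i < SPIN_LIMIT; i++) {
		c = 0;
		if (atomic_compare_exchange_strong(f, &c, 1))
			return;
		__builtin_ia32_pause();	/* cpu_relax() for x86 */
	}

	/* Contended: mark waiters present, then sleep until woken. */
	if (c != 2)
		c = atomic_exchange(f, 2);
	while (c != 0) {
		sys_futex(f, FUTEX_WAIT, 2);
		c = atomic_exchange(f, 2);
	}
}

static void unlock(atomic_int *f)
{
	/* Only enter the kernel if someone may be sleeping. */
	if (atomic_exchange(f, 0) == 2)
		sys_futex(f, FUTEX_WAKE, 1);
}

What userspace can't easily see is when the owner gets scheduled out,
which is exactly the information the kernel-side spin uses to stop
spinning; hence, I suspect, "difficult to achieve".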

> I'm using the futex_lock branch of my futextest test suite to gather results.
> The test is performance/futex_lock.c and can be checked out here:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/dvhart/futextest.git
>
> Per Avi's suggestion I added asm-instruction-based period and duty-cycle options
> to do some testing. A series of plots with 256 threads, one per period length,
> is shown at the URL below. Each plot compares raw futex lock (normal, no
> spinning) with a single spinner (adaptive) and with multiple spinners (aas) for
> each of several duty-cycles to determine performance at various levels of
> contention. The test measures the number of lock/unlock pairs performed per
> second (in thousands).
>
> http://www.kernel.org/pub/linux/kernel/people/dvhart/adaptive_futex/v4/
>
> Avi's suggested starting point was 1000 instructions with a 10% duty cycle.
> That plot "Futex Lock/Unlock (Period=1000, Threads=256)" does appear to be the
> most interesting of the lot. It's not so short that other overhead becomes the
> bottleneck, nor so long that the benefits of adaptive spinning disappear into
> the noise. The adaptive spin (with a single waiter) consistently
> beats the normal run, and outperforms aas for most duty-cycles. I rechecked a
> few points on this plot to confirm and the relative scores remained consistent.
> These plots were generated using 10,000,000 iterations per datapoint.
>
> Lastly, I should mention that these results all underperform a simple
> pthread_mutex_lock()/pthread_mutex_unlock() pair. I'm looking into why but felt
> I was overdue in sharing what I have to date. A test comparing this to a
> sched_yield()-style userspace spinlock would probably be more appropriate.
>
>
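
For reference, the sched_yield()-style userspace spinlock mentioned
above is usually just a few lines (sketch):

#include <sched.h>
#include <stdatomic.h>

static void yield_lock(atomic_int *l)
{
	while (atomic_exchange(l, 1))
		sched_yield();	/* spin, but let another thread run */
}

static void yield_unlock(atomic_int *l)
{
	atomic_store(l, 0);
}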

How many cores (or hardware threads) does this machine have? At a 10%
duty cycle, 256 threads works out to 256 x 0.10 = ~25 threads wanting
the lock at any instant, so about 25 waiters queued behind the holder
on average. I don't think this is realistic, and it means that spinning
is invoked only rarely.
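
To make that arithmetic explicit, I'm assuming each benchmark thread
does something like this per iteration (my reading of the description,
not the actual futextest code; lock()/unlock() as in the sketch above):

static void burn(long insns)
{
	while (insns-- > 0)
		asm volatile("" ::: "memory");	/* keep the loop alive */
}

static void worker(atomic_int *m, long iters, long period, int duty_pct)
{
	long i;

	for (i = 0; i < iters; i++) {
		lock(m);
		burn(period * duty_pct / 100);		/* critical section */
		unlock(m);
		burn(period * (100 - duty_pct) / 100);	/* outside the lock */
	}
}

With period=1000 and a 10% duty cycle, that's ~100 instructions inside
the lock and ~900 outside, per iteration.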

I'd be interested in seeing runs where the average number of waiters is
0.2, 0.5, 1, and 2, corresponding to moderate-to-bad contention. An
average of 25 waiters on compute-bound code means the application needs
to be rewritten; no amount of mutex tweaking will help it.
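
Back-of-envelope, assuming the crude utilization model above holds
(avg_waiters ~= nthreads x duty - 1, subtracting the lock holder):

#include <stdio.h>

/* nthreads ~= (avg_waiters + 1) / duty; duty = 10% as in the plots */
int main(void)
{
	const double duty = 0.10;
	const double waiters[] = { 0.2, 0.5, 1, 2 };
	unsigned i;

	for (i = 0; i < sizeof(waiters) / sizeof(waiters[0]); i++)
		printf("%.1f avg waiters -> %.0f threads\n",
		       waiters[i], (waiters[i] + 1) / duty);
	return 0;
}

That is, roughly 12, 15, 20, and 30 threads rather than 256.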

Does the wakeup code select the spinning waiter, or just a random waiter?

--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.


