Subject: Re: [PATCH -v4 0/2] Lockless memory allocator and list
From: Huang Ying
Date: 2010-11-17
On Wed, 2010-11-17 at 02:04 +0800, Peter Zijlstra wrote:
> On Tue, 2010-11-16 at 08:38 -0800, Linus Torvalds wrote:
> >
> > I kind of like the lock-less list implementation (it could easily be
> > useful for random things, and it's very simple).
>
> Yes, there's various implementations floating around, and we already
> have one in-kernel ( net/rds/xlist.h ), mason and axboe and me have been
> kicking around various patches using that thing in other circumstances
> as well.
>
> [ At some point we had perf -- what now is kernel/irq_work.c -- using
> it as well, but the new code grew too complex due to requirements
> from Huang ]

I think it should be possible for them to use the general lockless list
implementation in this patch, which should reduce some code duplication
and complexity. Do you agree?
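
For reference, the core of such a list is nothing more than a cmpxchg
loop on the head pointer. A minimal user-space sketch of the idea (the
ll_* names and the C11 atomics here are mine, for illustration only;
the kernel version would use cmpxchg() directly):

#include <stdatomic.h>
#include <stddef.h>

struct ll_node {
	struct ll_node *next;
};

struct ll_head {
	_Atomic(struct ll_node *) first;
};

/* Push one node; retry the cmpxchg if another CPU raced with us. */
static void ll_add(struct ll_node *node, struct ll_head *head)
{
	struct ll_node *old = atomic_load(&head->first);

	do {
		node->next = old;
	} while (!atomic_compare_exchange_weak(&head->first, &old, node));
}

/* Detach the whole list in one shot; the caller then walks it privately. */
static struct ll_node *ll_del_all(struct ll_head *head)
{
	return atomic_exchange(&head->first, NULL);
}

Deleting a single arbitrary node lock-lessly is the hard part (ABA),
which is why users like xlist/irq_work stick to add + del_all.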

> > And I don't think the
> > notion of a lockless memory allocator is wrong either, although it
> > looks a lot more specialized than the list thing (the solution to
> > lockless allocations is generally simply to do them ahead of time).
> >
> Right, I don't generally object to lockless things, but they either need
> to be 1) faster than the existing code, and/or 2) have a very convincing
> use-case (other than performance) for their added complexity.

I will post a generic hardware error reporting patchset soon; the
lock-less memory allocator is used there. I think we may also be able
to use it in the lockdep code, which, if my understanding is correct,
needs to allocate memory locklessly.

> Afaict the proposed patch adds lots more LOCK'ed instructions into that
> allocator path than it removes, ie its a slow down for existing users.

Let's take a look at gen_pool_alloc.

The locks removed:

- one rwlock: pool->lock
- one spinlock for each chunk: chunk->lock

The LOCK'ed instructions added:

- one or two cmpxchg operations in most cases. If there is heavy
  contention between users there will be more cmpxchg retries, so I
  suggest using one gen_pool per CPU in heavy-contention situations.
  The per-word cmpxchg is sketched below.
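
To make the cmpxchg count concrete, the per-word bitmap update in a
chunk can look roughly like the following user-space sketch (my own
illustration, not the exact patch code; set_bits_ll is an illustrative
name):

#include <stdatomic.h>
#include <stdbool.h>

/* Try to claim the bits in 'mask' within one bitmap word.  A small
 * allocation touches one or two words, hence one or two cmpxchg in
 * the common case; a race with another CPU just retries the loop. */
static bool set_bits_ll(_Atomic unsigned long *word, unsigned long mask)
{
	unsigned long old = atomic_load(word);

	do {
		if (old & mask)
			return false;	/* already taken, caller searches elsewhere */
	} while (!atomic_compare_exchange_weak(word, &old, old | mask));

	return true;
}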

BTW: the original gen_pool was designed to deal with special-purpose
memory in some drivers, so I don't think performance is a big issue for
it.

Best Regards,
Huang Ying



