Subject: Re: [PATCH][RFC] perf lock: Distribute numerical IDs for each lock instances
On Mon, Dec 14, 2009 at 22:30, Frederic Weisbecker <fweisbec@gmail.com> wrote:
> On Sun, Dec 13, 2009 at 05:02:24PM +0900, Hitoshi Mitake wrote:
>> This patch implements the distribution of a numerical ID to each lock instance.
>>
>> Principle:
>> First, the kernel holds lock_idtable. It is a two-dimensional array:
>> for each lock class, LOCK_IDTABLE_LENGTH entries of struct lock_instance_stat.
>>
>> struct lock_instance_stat can hold various things for lock statistics,
>> but that is not the important point.
>>
>> The important point is that the index into lock_idtable is used as the numerical ID,
>> and this patch makes lockdep_map carry this ID. So looking up the stats data for
>> a lock instance is O(1), very fast.
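
(A minimal sketch of the data layout described above; the value of
LOCK_IDTABLE_LENGTH, the enum of lock classes and most field names here are
assumptions made for illustration, not the actual patch:)

#define LOCK_IDTABLE_LENGTH 4096	/* assumed upper bound of instances per class */

enum lock_class_kind {
	LOCK_CLASS_SPINLOCK,
	LOCK_CLASS_RWLOCK,
	LOCK_CLASS_MUTEX,
	LOCK_CLASS_MAX,
};

/* one slot per instance; the slot's index is the instance's numerical ID */
struct lock_instance_stat {
	void		*owner;		/* the spinlock_t/rwlock_t/... owning this slot */
	const char	*name;
	/* per-instance statistics can live here */
};

/* one row of slots per lock class */
static struct lock_instance_stat lock_idtable[LOCK_CLASS_MAX][LOCK_IDTABLE_LENGTH];

/* sketched, not the real lockdep definition */
struct lockdep_map {
	const char	*name;		/* already present in lockdep */
	int		id;		/* index into lock_idtable: O(1) lookup of the stats slot */
	void		(*do_register)(struct lockdep_map *lock);
};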
>>
>> The second important point is when the numerical IDs are distributed
>> to each lock instance.
>> This patch adds a new member, do_register, to lockdep_map. It is a function pointer.
>> When a lock instance is initialized (for example by SPIN_DEP_MAP_INIT in spinlock_types.h),
>> this member is initialized to register_lockdep_id() in kernel/lockdep.c.
>> This function receives the address of the lockdep_map, searches lock_idtable,
>> and, if an unused ID (an unused index) is found, stores the owner of the lockdep_map
>> (the spinlock_t, rwlock_t, etc. obtained with container_of) into the lock_idtable
>> slot determined above.
>>
>> When is this do_register() called? lock_acquire() calls it.
>> Once the registration has completed, nop_register_lockdep_id(), a nop
>> function, is assigned to do_register().
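
(Roughly, the registration path might look like the sketch below, built on the
assumed declarations above; class selection, locking and error handling are
simplified and the bodies are illustrative, not the patch's code:)

static void nop_register_lockdep_id(struct lockdep_map *lock)
{
	/* registration already done: later lock_acquire() calls cost nothing */
}

static void register_lockdep_id(struct lockdep_map *lock)
{
	/* class selection simplified to one row; the patch keeps a row per class */
	struct lock_instance_stat *row = lock_idtable[LOCK_CLASS_SPINLOCK];
	int id;

	for (id = 0; id < LOCK_IDTABLE_LENGTH; id++) {
		if (!row[id].owner) {
			/* the patch stores the enclosing spinlock_t/rwlock_t found via container_of() */
			row[id].owner = lock;
			row[id].name  = lock->name;
			lock->id = id;
			break;
		}
	}
	lock->do_register = nop_register_lockdep_id;	/* run only once */
}

void lock_acquire(struct lockdep_map *lock /* , other arguments elided */)
{
	if (lock->do_register)	/* see the note below about removing this check */
		lock->do_register(lock);
	/* existing lock_acquire() work continues here */
}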
>>
>> I believe this patch makes the implementation of perf lock easier
>> and more precise, and may also help the locking people.
>>
>> Benefit:
>>  * Low overhead
>>  * An int-typed ID is easy to deal with
>>
>> Demerit:
>>  * What should the kernel do when lock_idtable is exhausted?
>>    (But I think the total number of lock instances is not so large,
>>     and predicting its upper limit is not hard)
>>  * Instances that have not yet called lock_acquire() cannot be identified
>>  * The end of an instance's lifetime cannot be determined
>>
>> This patch is a proof-of-concept version.
>> There are some remaining todos, especially this statement in lock_acquire():
>>       if (lock->do_register)
>> This must be removed in the future; its presence proves that
>> there are some lockdep_maps I cannot initialize correctly yet,
>> for example rcu_lock_map in kernel/rcupdate.c.
>>
>> I also implemented a debugfs entry, lockdep_idtable,
>> which dumps the ID and name of each instance like this:
>> % cat  lockdep_idtable
>>         --- spinlock ---
>> 0: logbuf_lock
>> 1: (console_sem).lock
>> 2: clockevents_lock
>> 3: set_atomicity_lock
>> 4: pgd_lock
>> 5: init_mm.page_table_lock
>> 6: index_init_lock
>> 7: ioapic_lock
>> 8: x86_mce_decoder_chain.lock
>> 9: pcpu_lock
>> 10: perf_resource_lock
>> ...
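
(For reference, such a dump could come from a seq_file show routine along these
lines; this is only a sketch built on the assumed table layout above, wired up
elsewhere with debugfs_create_file() and single_open(), and not the code in the
patch:)

static const char * const class_names[LOCK_CLASS_MAX] = {
	[LOCK_CLASS_SPINLOCK]	= "spinlock",
	[LOCK_CLASS_RWLOCK]	= "rwlock",
	[LOCK_CLASS_MUTEX]	= "mutex",
};

static int lockdep_idtable_show(struct seq_file *m, void *v)
{
	int class, id;

	for (class = 0; class < LOCK_CLASS_MAX; class++) {
		seq_printf(m, "\t--- %s ---\n", class_names[class]);
		for (id = 0; id < LOCK_IDTABLE_LENGTH; id++) {
			if (!lock_idtable[class][id].owner)
				break;	/* IDs are handed out densely from 0 */
			seq_printf(m, "%d: %s\n", id, lock_idtable[class][id].name);
		}
	}
	return 0;
}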
>>
>> It passes checkpatch.pl, so
>> if you like it, could you merge it into the perf/lock branch, Ingo?
>> And I really want to hear comments from people! :)
>>
>> Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
>> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
>> Cc: Paul Mackerras <paulus@samba.org>
>> Cc: Frederic Weisbecker <fweisbec@gmail.com>
>
>
>
> So if I understand well, this maps each lockdep_map
> into a unique index, right?

There's a slight difference. This patch maps each lock instance
(spinlock_t, rwlock_t, etc.) to a unique index.

>
> Then the goal is to be able to identify each lock instance from
> perf lock using a simple array of indexes.
>
> But adding the lockdep_map address to the lock events would provide
> you a unique id for each lock instance already. Sure, it would
> be addresses instead of indexes, but I think a hashlist should do
> the trick already. I'm not sure we need to add more complexity
> to in-kernel lock debugging while we already have the keys
> to make it scale well in userspace.

The use case I assumed is (for example)
separating the copying of lock instance names to userspace from the lock trace events.

I think that copying the name of the lock at every trace event is not efficient.
With my approach, an ID <-> name table can be built instead,
so each lock event only has to output its ID.
Then perf lock reads the table from a file on debugfs,
and finally perf lock can look up the table and obtain the name of each lock.
This may reduce the data transfer between kernel and userspace.
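
(As a rough illustration of that userspace side, with invented names, perf lock
could read the debugfs table once into an id -> name array and then resolve
names from the IDs carried by the events:)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_LOCK_ID 16384	/* assumed to be >= LOCK_IDTABLE_LENGTH */

/* id -> name table; per-class handling omitted, the real dump restarts IDs per class */
static char *lock_names[MAX_LOCK_ID];

static void read_idtable(const char *path)
{
	char line[256], name[224];
	int id;
	FILE *fp = fopen(path, "r");

	if (!fp)
		return;
	while (fgets(line, sizeof(line), fp)) {
		/* "--- spinlock ---" class headers fail this parse and are skipped */
		if (sscanf(line, "%d: %223s", &id, name) == 2 &&
		    id >= 0 && id < MAX_LOCK_ID)
			lock_names[id] = strdup(name);
	}
	fclose(fp);
}

With such a table, an event only needs to carry the int ID, and lock_names[id]
yields the instance name at report time.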

But... you are right. The same effect can also be obtained with a hashlist;
there's no need to implement the array.
And optimization should come after the implementation.
I'll go back to coding perf lock, sorry...
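
(For comparison, a hashlist as Frederic suggests would key per-instance state
on the lockdep_map address carried by the event; the following is only a
userspace sketch with invented names, not the eventual perf lock code:)

#include <stdint.h>
#include <stdlib.h>

#define LOCK_HASH_BITS	10
#define LOCK_HASH_SIZE	(1 << LOCK_HASH_BITS)

struct lock_stat {
	uint64_t		addr;		/* lockdep_map address from the trace event */
	char			*name;		/* copied once, from the first event carrying it */
	uint64_t		nr_acquired;	/* example statistic */
	struct lock_stat	*next;		/* chaining on hash collision */
};

static struct lock_stat *lock_hash[LOCK_HASH_SIZE];

static struct lock_stat *lock_stat_find_or_add(uint64_t addr)
{
	unsigned int h = (addr >> 4) & (LOCK_HASH_SIZE - 1);	/* cheap hash of the address */
	struct lock_stat *ls;

	for (ls = lock_hash[h]; ls; ls = ls->next)
		if (ls->addr == addr)
			return ls;

	ls = calloc(1, sizeof(*ls));
	if (!ls)
		return NULL;
	ls->addr = addr;
	ls->next = lock_hash[h];
	lock_hash[h] = ls;
	return ls;
}

Comparing the lookup cost of such a table against plain array indexing is the
measurement hinted at below.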

# But I think this would still be useful for measuring the overhead of the hashlist! :)

Thanks
Hitoshi
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
