Subject: Re: [PATCH v8 5/6] lib/dlock-list: Enable faster lookup with hashing
From: Waiman Long <longman@redhat.com>
Date: 2017-11-01
On 11/01/2017 04:40 AM, Jan Kara wrote:
> On Tue 31-10-17 14:50:59, Waiman Long wrote:
>> Insertion and deletion are relatively cheap and mostly contention
>> free for dlock-list. Lookup, on the other hand, can be rather costly
>> because all the lists in a dlock-list will have to be iterated.
>>
>> Currently dlock-list insertion is based on the cpu that the task is
>> running on. So a given object can be inserted into any one of the
>> lists depending on what the current cpu is.
>>
>> This patch provides an alternative way of list selection. The caller
>> can provide an object context which will be hashed to one of the lists
>> in a dlock-list. The object can then be added into that particular
>> list. Lookup can be done by iterating elements in the provided list
>> only instead of all the lists in a dlock-list.
>>
>> The new APIs are:
>>
>> struct dlock_list_head *dlock_list_hash(struct dlock_list_heads *, void *);
>> void dlock_list_add(struct dlock_list_node *, struct dlock_list_head *);
>>
>> Signed-off-by: Waiman Long <longman@redhat.com>
> Hum, do we have any users for this API? And wouldn't they also need to
> control how many lists are allocated then?

This patch is supposed to be used by the epoll patch from Davidlohr. As
he has retracted that patch, I can drop this one as well. The number of
lists scales with the number of CPU cores in the system whether it is
used one way or the other.
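
For illustration, a caller using the hashed selection would look roughly
like the sketch below. The foo_item structure, its key field and the
foo_lists variable are made up for the example, and the bucket walk
assumes dlock_list_head exposes its ->lock and ->list members as defined
in this series:

#include <linux/dlock-list.h>
#include <linux/list.h>
#include <linux/spinlock.h>

/* Illustrative only: foo_item and its key are hypothetical. */
struct foo_item {
	struct dlock_list_node node;
	void *key;
};

/* Allocated with alloc_dlock_list_heads() at init time. */
static struct dlock_list_heads foo_lists;

static void foo_insert(struct foo_item *item)
{
	struct dlock_list_head *head;

	/* Hash the key to pick one list instead of using the current CPU. */
	head = dlock_list_hash(&foo_lists, item->key);
	dlock_list_add(&item->node, head);
}

static struct foo_item *foo_lookup(void *key)
{
	struct dlock_list_head *head = dlock_list_hash(&foo_lists, key);
	struct dlock_list_node *n;
	struct foo_item *found = NULL;

	/* Walk only the hashed list, not every list in foo_lists. */
	spin_lock(&head->lock);
	list_for_each_entry(n, &head->list, list) {
		struct foo_item *item = container_of(n, struct foo_item, node);

		if (item->key == key) {
			found = item;
			break;
		}
	}
	spin_unlock(&head->lock);

	return found;
}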

Cheers,
Longman
