Subject: Re: [PATCH v9 5/5] lib/dlock-list: Scale dlock_lists_empty()
From: Waiman Long <longman@redhat.com>
Date: 2018-10-04
On 10/04/2018 03:16 AM, Jan Kara wrote:
> On Wed 12-09-18 15:28:52, Waiman Long wrote:
>> From: Davidlohr Bueso <dave@stgolabs.net>
>>
>> Instead of the current O(N) implementation, at the cost
>> of adding an atomic counter, we can convert the call to
>> an atomic_read(). The counter only serves to account for
>> empty to non-empty transitions, and vice versa; therefore
>> it is only modified twice for each of the lists during the
>> lifetime of the dlock (while in use).
>>
>> In addition, to be able to unaccount a list_del(), we
>> add a dlist pointer to each head, thus minimizing the
>> overall memory footprint.
>>
>> Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
>> Acked-by: Waiman Long <longman@redhat.com>
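
For anyone skimming the idea above, here is a minimal userspace sketch of the
counter-based scheme (illustrative only, not the kernel patch; names such as
dlist, sublist and used_lists are made up for the example, and the owning
dlist is passed in explicitly instead of via the per-head back-pointer the
patch adds):

/*
 * Keep an atomic count of non-empty sublists so the "is the whole
 * thing empty?" check is one atomic load instead of a scan.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_SUBLISTS 4

struct node { struct node *next, *prev; };

struct sublist {
	pthread_mutex_t lock;
	struct node head;		/* head.next == &head means empty */
};

struct dlist {
	struct sublist lists[NR_SUBLISTS];
	atomic_int used_lists;		/* number of non-empty sublists */
};

static void list_init(struct node *h) { h->next = h->prev = h; }
static bool list_empty(struct node *h) { return h->next == h; }

static void list_add(struct node *n, struct node *h)
{
	n->next = h->next;
	n->prev = h;
	h->next->prev = n;
	h->next = n;
}

static void list_del(struct node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	n->next = n->prev = n;
}

void dlist_init(struct dlist *d)
{
	atomic_init(&d->used_lists, 0);
	for (int i = 0; i < NR_SUBLISTS; i++) {
		pthread_mutex_init(&d->lists[i].lock, NULL);
		list_init(&d->lists[i].head);
	}
}

/* O(1) empty check: just read the transition counter. */
bool dlist_empty(struct dlist *d)
{
	return atomic_load(&d->used_lists) == 0;
}

void dlist_add(struct dlist *d, struct node *n, int cpu)
{
	struct sublist *s = &d->lists[cpu % NR_SUBLISTS];

	pthread_mutex_lock(&s->lock);
	if (list_empty(&s->head))	/* empty -> non-empty transition */
		atomic_fetch_add(&d->used_lists, 1);
	list_add(n, &s->head);
	pthread_mutex_unlock(&s->lock);
}

void dlist_del(struct dlist *d, struct node *n, int cpu)
{
	struct sublist *s = &d->lists[cpu % NR_SUBLISTS];

	pthread_mutex_lock(&s->lock);
	list_del(n);
	if (list_empty(&s->head))	/* non-empty -> empty transition */
		atomic_fetch_sub(&d->used_lists, 1);
	pthread_mutex_unlock(&s->lock);
}

int main(void)
{
	struct dlist d;
	struct node n;

	dlist_init(&d);
	printf("empty? %d\n", dlist_empty(&d));	/* 1 */
	dlist_add(&d, &n, 0);
	printf("empty? %d\n", dlist_empty(&d));	/* 0 */
	dlist_del(&d, &n, 0);
	printf("empty? %d\n", dlist_empty(&d));	/* 1 */
	return 0;
}

The point is simply that dlist_empty() becomes one atomic load, while the
add/del paths only touch the counter on an empty <-> non-empty transition of
their own sublist.
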
> So I was wondering: Is this really worth it? AFAICS we have a single call
> site for dlock_lists_empty() and that happens during umount where we don't
> really care about this optimization. So it seems like unnecessary
> complication to me at this point? If someone comes up with a use case that
> needs a fast dlock_lists_empty(), then sure, we can do this...
>

Yes, that is true. We can skip this patch for the time being until a use
case comes up which requires dlock_lists_empty() to be used in the fast
path.

Cheers,
Longman
