Subject: Re: [PATCH 4/6] rbtree: faster augmented insert
On Fri, Jul 27, 2012 at 1:04 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Fri, 2012-07-20 at 05:31 -0700, Michel Lespinasse wrote:
>> +static void augment_rotate(struct rb_node *rb_old, struct rb_node *rb_new)
>> +{
>> +	struct test_node *old = rb_entry(rb_old, struct test_node, rb);
>> +	struct test_node *new = rb_entry(rb_new, struct test_node, rb);
>> +
>> +	/* Rotation doesn't change subtree's augmented value */
>> +	new->augmented = old->augmented;
>> +	old->augmented = augment_recompute(old);
>> +}
>
>> +static inline void augment_propagate(struct rb_node *rb)
>> +{
>> +	while (rb) {
>> +		struct test_node *node = rb_entry(rb, struct test_node, rb);
>> +		node->augmented = augment_recompute(node);
>> +		rb = rb_parent(&node->rb);
>> +	}
>> +}
>
> So why do we have to introduce these two new function pointers to pass
> along when they can both be trivially expressed in the old single
> augment function?

It's because augment_rotate() needs to be a static function whose address
we can take and pass along as a callback to the tree rebalancing
functions, while augment_propagate() needs to be an inline function that
gets compiled into an __rb_erase() variant for a given type of augmented
rbtree.
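
To illustrate the linkage difference, here is a minimal standalone sketch.
The names (augment_rotate_hook, augment_propagate_hook, my_rebalance,
my_erase) are made up for this example and are not the actual API in this
patch series; the point is only which hook must exist as an addressable
function and which one can stay inline:

/* Hook 1: must be a real static function, because its address is
 * passed into shared, out-of-line rebalancing code. */
static void augment_rotate_hook(struct rb_node *old, struct rb_node *new)
{
	/* ... transfer/recompute augmented values across the rotation ... */
}

/* Shared rebalancing code, compiled once for all augmented rbtree
 * users; it only sees the rotate hook through a function pointer. */
static void my_rebalance(struct rb_node *node,
			 void (*rotate)(struct rb_node *, struct rb_node *))
{
	/* ... perform rotations, calling rotate(old, new) at each one ... */
}

/* Hook 2: an inline helper.  It is never passed by address; it gets
 * compiled directly into the per-type erase variant below, so the walk
 * up to the root costs no indirect calls. */
static inline void augment_propagate_hook(struct rb_node *rb)
{
	while (rb) {
		/* ... recompute this node's augmented value ... */
		rb = rb_parent(rb);
	}
}

/* Per-type erase variant: one such function per augmented rbtree type,
 * with the propagate hook inlined into its body. */
static inline void my_erase(struct rb_node *node, struct rb_root *root)
{
	struct rb_node *parent = rb_parent(node);

	/* ... unlink node from the tree ... */
	augment_propagate_hook(parent);            /* inlined, no callback */
	my_rebalance(parent, augment_rotate_hook); /* address taken */
}

The rotate hook is the only one that has to cross a function-pointer
boundary; everything the per-type erase variant calls directly can stay
inline and be specialized for that tree type.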

--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.

