Subject: Re: [RFT][PATCH] sched, cgroup: Optimize load_balance_fair()

On Wed, Jul 13, 2011 at 11:01:03PM +0200, Peter Zijlstra wrote:
> On Wed, 2011-07-13 at 10:13 -0700, Paul Turner wrote:
> > > +static void update_h_load(long cpu)
> > > +{
> > > + walk_tg_tree(tg_load_down, tg_nop, (void *)cpu);
> > > +}
> >
> > With a list_for_each_entry_reverse_rcu() this could also only operate
> > on the local hierarchy and avoid the tg tree walk.
>
> Ah, sadly that primitive cannot exist: the rcu list primitives only
> keep the fwd link.
>
> Although I guess we could 'fix' that.

We could, at least in theory -- make list_del_rcu() not poison the
->prev link. Or, given that there are use cases that absolutely cannot
tolerate following ->prev links, have a list_del_rcu_both() or something
so that list_del_rcu() keeps its current error checking. Oddly enough,
__list_add_rcu() doesn't need to change because the rcu_assign_pointer()
for the predecessor's ->next pointer covers the successor's ->prev
pointer as well. OK, a comment is clearly needed...
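
Something like the untested sketch below, perhaps. The name
list_del_rcu_both() is just a placeholder, and __list_del() is the
usual helper from list.h:

	/*
	 * Remove an element from an RCU-protected list without
	 * poisoning its ->prev pointer, so that concurrent readers
	 * traversing the list backwards can still make progress.
	 * Unlike list_del_rcu(), this gives up the error checking
	 * that LIST_POISON2 provides.
	 */
	static inline void list_del_rcu_both(struct list_head *entry)
	{
		__list_del(entry->prev, entry->next);
		/* No LIST_POISON2 assignment: ->prev stays valid. */
	}

And the comment I have in mind for __list_add_rcu() would read roughly
as follows, the code itself unchanged from rculist.h:

	static inline void __list_add_rcu(struct list_head *new,
			struct list_head *prev, struct list_head *next)
	{
		new->next = next;
		new->prev = prev;
		/*
		 * The barrier in rcu_assign_pointer() orders the
		 * initialization of new->next and new->prev before
		 * either store that makes "new" reachable, so a
		 * reader arriving at "new" from either direction
		 * sees both of its pointers fully initialized.
		 */
		rcu_assign_pointer(prev->next, new);
		next->prev = new;
	}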

Of course, in a two-way-RCU doubly linked list, p->next->prev is not
necessarily equal to p: a reader can, for example, reach a just-added
element through the predecessor's ->next before __list_add_rcu() has
updated the successor's ->prev to point back at it.
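
The list_for_each_entry_reverse_rcu() that started this discussion
would then presumably just be the mirror image of
list_for_each_entry_rcu(), something like this (again untested):

	#define list_for_each_entry_reverse_rcu(pos, head, member) \
		for (pos = list_entry_rcu((head)->prev, typeof(*pos), member); \
		     &pos->member != (head); \
		     pos = list_entry_rcu(pos->member.prev, typeof(*pos), member))

Given the above caveat, a backward reader can transiently miss an
element whose insertion it races with, but that is no worse than what
forward RCU readers already tolerate.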

But how deep/wide is the tree and how many cache misses are expected?
Would this solve a real problem?

Thanx, Paul

