Subject: Re: [RFC] dynticks: dynticks_idle is only modified locally use this_cpu ops
On Wed, 3 Sep 2014, Paul E. McKenney wrote:

> > Well, a shared data structure would be cleaner in general but there are
> > certainly other approaches.
>
> Per-CPU variables -are- a shared data structure.

No, the intent is for them to be private to the particular cpu and
therefore there is only limited support for sharing. It's not a shared
data structure in the classic sense.

The code in the rcu subsystem operates like other percpu code: the
current processor modifies its local variables and other processors
inspect the state once in a while. The other percpu code does not need
atomics and barriers. RCU, for some reason that is not clear to me, does.
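
Something like this generic pattern is what I mean by the usual percpu
approach (a sketch with made up names, not code from any particular
subsystem):

	#include <linux/percpu.h>
	#include <linux/compiler.h>

	/* Counter that is only ever modified by its owning cpu. */
	static DEFINE_PER_CPU(unsigned long, my_counter);

	static void local_event(void)
	{
		/* Owner-only update: no lock, no atomic, no barrier. */
		__this_cpu_inc(my_counter);
	}

	static unsigned long peek_counter(int cpu)
	{
		/* Remote readers tolerate a slightly stale value. */
		return ACCESS_ONCE(per_cpu(my_counter, cpu));
	}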

> > But lets focus on the dynticks_idle case we are discussing here rather
> > than tackle the more difficult other atomics. What is checked in the loop
> > over the remote cpus is the dynticks_idle value plus
> > dynticks_idle_jiffies. So it seems that memory ordering is only used to
> > ensure that the jiffies are seen correctly.
> >
> > In that case both the dynticks_idle and dynticks_idle_jiffies could be
> > placed in one 64 bit value. If this is stored and retrieved as one then
> > there is no issue with ordering anymore and the barriers would no longer
> > be needed.
>
> If there was an upper bound on the propagation of values through a system,
> I could buy this.

What is different about the propagation speeds? The atomic read in the
function that checks whether the quiescent period has passed is a regular
read anyway. Does the atomic_inc make the cacheline propagate faster
through the system? I understand that it evicts the cacheline (which
also contains other percpu data, by the way) from the other processors'
caches. Is that the desired effect?
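
Just to make the combined 64 bit value suggested above concrete, here is
a rough sketch. All names are made up, this is not the actual RCU code,
and it assumes that a 64 bit load/store is a single access:

	#include <linux/percpu.h>
	#include <linux/jiffies.h>
	#include <linux/compiler.h>
	#include <linux/types.h>

	/* Idle counter and jiffies snapshot kept in one 64 bit word. */
	union idle_snap {
		struct {
			u32 counter;	/* odd while the cpu is idle */
			u32 jif;	/* low bits of jiffies at the transition */
		};
		u64 word;
	};

	static DEFINE_PER_CPU(union idle_snap, idle_snap);

	/* Owning cpu: a single 64 bit store, no barrier, no atomic. */
	static void idle_transition(void)
	{
		union idle_snap s;

		s.word = __this_cpu_read(idle_snap.word);
		s.counter++;
		s.jif = (u32)jiffies;
		__this_cpu_write(idle_snap.word, s.word);
	}

	/* Remote observer: one 64 bit read sees both fields together. */
	static u64 idle_snap_of(int cpu)
	{
		return ACCESS_ONCE(per_cpu(idle_snap, cpu).word);
	}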

> But Mike Galbraith checked the overhead of ->dynticks_idle and found
> it to be too small to measure. So doesn't seem to be a problem worth
> extraordinary efforts, especially given that many systems can avoid
> it simply by leaving CONFIG_NO_HZ_SYSIDLE=n.

The code looks fragile and bound to have issues in the future given the
barriers/atomics etc. It is going to be cleaner without that.

And right now we are focusing on the simplest case. The atomics scheme is
used multiple times in the RCU subsystem, and there is more weird-looking
code there, like atomic_add with zero, etc.
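
That atomic_add with zero is this kind of construct (from memory, so not
a verbatim quote of kernel/rcu/tree.c):

	#include <linux/atomic.h>

	/*
	 * atomic_add_return() with a zero increment: the add itself does
	 * nothing, it is only there to get a full-barrier read of the
	 * dynticks counter.
	 */
	static int snap_dynticks(atomic_t *dynticks)
	{
		return atomic_add_return(0, dynticks);
	}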
