Subject: Re: [PATCH v7 2/6] rcu/segcblist: Add counters to segcblist datastructure
On Wed, Oct 21, 2020 at 11:33:14AM -0400, joel@joelfernandes.org wrote:
> On Thu, Oct 15, 2020 at 02:21:58PM +0200, Frederic Weisbecker wrote:
> > On Wed, Oct 14, 2020 at 08:22:57PM -0400, Joel Fernandes (Google) wrote:
> > > Add counting of segment lengths to the segmented callback list.
> > >
> > > This will be useful for a number of things, such as knowing how big the
> > > ready-to-execute segment has gotten. The immediate benefit is the ability
> > > to trace how the callbacks in the segmented callback list change.
> > >
> > > Also, this patch removes hacks related to using donecbs's ->len field as a
> > > temporary variable to save the segmented callback list's length. This can
> > > no longer be done, nor is it needed.
> > >
> > > Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> > > ---
> > > include/linux/rcu_segcblist.h |   2 +
> > > kernel/rcu/rcu_segcblist.c    | 133 +++++++++++++++++++++++-----------
> > > kernel/rcu/rcu_segcblist.h    |   2 -
> > > 3 files changed, 92 insertions(+), 45 deletions(-)
> > >
> > > diff --git a/include/linux/rcu_segcblist.h b/include/linux/rcu_segcblist.h
> > > index b36afe7b22c9..d462ae5e340a 100644
> > > --- a/include/linux/rcu_segcblist.h
> > > +++ b/include/linux/rcu_segcblist.h
> > > @@ -69,8 +69,10 @@ struct rcu_segcblist {
> > >  	unsigned long gp_seq[RCU_CBLIST_NSEGS];
> > >  #ifdef CONFIG_RCU_NOCB_CPU
> > >  	atomic_long_t len;
> > > +	atomic_long_t seglen[RCU_CBLIST_NSEGS];
> >
> > Also does it really need to be atomic?
>
> Right, it need not be. I will make the change for ->seglen.
>
> BTW, for the existing ->len field, doesn't the following need to acquire the
> nocb lock?
> rcu_nocb_try_bypass -> rcu_segcblist_inc_len
>
> Otherwise, it seems that would do a lock-less non-atomic RMW on a
> nocb-offloaded list.

I believe it shouldn't be necessary. That's an atomic add, so the kthreads
manipulating it concurrently shouldn't have any trouble. None that I can
imagine tonight, at least...
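
To illustrate the point, here is a standalone userspace sketch (not kernel
code; the names are made up for the demo): an atomic add never loses counts
under concurrency, while a plain read-modify-write can, which is the
difference between bumping the atomic ->len locklessly and doing a plain
increment without the lock.

/*
 * Standalone userspace demo (made-up names, not kernel code): an atomic
 * add never loses increments under concurrency, while a plain
 * read-modify-write can.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_long atomic_len;	/* analogous to the atomic ->len */
static long plain_len;		/* plain counter: racy without a lock */

static void *bump(void *unused)
{
	for (int i = 0; i < 100000; i++) {
		atomic_fetch_add(&atomic_len, 1);	/* never loses an increment */
		plain_len++;				/* lock-less RMW: can lose increments */
	}
	return NULL;
}

int main(void)
{
	pthread_t t[4];

	for (int i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, bump, NULL);
	for (int i = 0; i < 4; i++)
		pthread_join(t[i], NULL);

	/* atomic_len ends up exactly 400000; plain_len will very likely be lower. */
	printf("atomic: %ld plain: %ld\n", atomic_load(&atomic_len), plain_len);
	return 0;
}

Build it with "cc -pthread" and the plain counter usually comes up short.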

> Certainly rcu_nocb_do_flush_bypass() does do it, so maybe it was missed?

I believe it increments under the lock here because the increment happens to be
on the way to the insertion of the callbacks :o)
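
To picture the distinction, a toy sketch (made-up names, not the actual
rcu_nocb_do_flush_bypass() code): the lock is already held in order to splice
the callbacks in, so the length update simply rides along under it rather than
needing the lock for its own sake.

/* Toy sketch with made-up names (not the kernel source). */
#include <pthread.h>

struct toy_cblist {
	pthread_mutex_t lock;
	long len;
	/* ...the list head of callbacks would live here... */
};

static void toy_flush_into(struct toy_cblist *dst, long nr_moved)
{
	pthread_mutex_lock(&dst->lock);	/* taken for the insertion itself */
	dst->len += nr_moved;		/* increment "on the way"... */
	/* ...splice the nr_moved callbacks onto dst's list here... */
	pthread_mutex_unlock(&dst->lock);
}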
