Subject: Re: [PATCH v7 2/6] rcu/segcblist: Add counters to segcblist datastructure
On Thu, Oct 15, 2020 at 02:21:58PM +0200, Frederic Weisbecker wrote:
> On Wed, Oct 14, 2020 at 08:22:57PM -0400, Joel Fernandes (Google) wrote:
> > Add counting of segment lengths to the segmented callback list.
> >
> > This will be useful for a number of things, such as knowing how big the
> > ready-to-execute segment has gotten. The immediate benefit is the ability
> > to trace how the callbacks in the segmented callback list change.
> >
> > This patch also removes hacks that used donecbs's ->len field as a
> > temporary variable to save the segmented callback list's length. That is
> > no longer possible, and no longer needed.
> >
> > Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> > ---
> > include/linux/rcu_segcblist.h | 2 +
> > kernel/rcu/rcu_segcblist.c | 133 +++++++++++++++++++++++-----------
> > kernel/rcu/rcu_segcblist.h | 2 -
> > 3 files changed, 92 insertions(+), 45 deletions(-)
> >
> > diff --git a/include/linux/rcu_segcblist.h b/include/linux/rcu_segcblist.h
> > index b36afe7b22c9..d462ae5e340a 100644
> > --- a/include/linux/rcu_segcblist.h
> > +++ b/include/linux/rcu_segcblist.h
> > @@ -69,8 +69,10 @@ struct rcu_segcblist {
> > unsigned long gp_seq[RCU_CBLIST_NSEGS];
> > #ifdef CONFIG_RCU_NOCB_CPU
> > atomic_long_t len;
> > + atomic_long_t seglen[RCU_CBLIST_NSEGS];
>
> Also, does it really need to be atomic?

Right, it need not be atomic. I will make the change for ->seglen.
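
Concretely, I expect ->seglen to become a plain long array with
READ_ONCE()/WRITE_ONCE() accessors. An untested sketch of what I have in
mind (helper names as in this patch):

	/* Return the number of callbacks in the specified segment. */
	static long rcu_segcblist_get_seglen(struct rcu_segcblist *rsclp, int seg)
	{
		return READ_ONCE(rsclp->seglen[seg]);
	}

	/* Set the number of callbacks in the specified segment. */
	static void rcu_segcblist_set_seglen(struct rcu_segcblist *rsclp, int seg, long v)
	{
		WRITE_ONCE(rsclp->seglen[seg], v);
	}

	/* Bump the count for the specified segment, e.g. on enqueue. */
	static void rcu_segcblist_inc_seglen(struct rcu_segcblist *rsclp, int seg)
	{
		rcu_segcblist_set_seglen(rsclp, seg,
					 rcu_segcblist_get_seglen(rsclp, seg) + 1);
	}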

BTW, for the existing ->len field, doesn't the following path need to acquire
the nocb lock?

rcu_nocb_try_bypass() -> rcu_segcblist_inc_len()

Otherwise, it seems that will do a lockless, non-atomic RMW on a
nocb-offloaded list.

rcu_nocb_do_flush_bypass() certainly does acquire it, so maybe this one was
missed?
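
To spell out the failure mode I'm worried about, assuming the update ever
runs lockless through the non-atomic path (hypothetical interleaving):

	CPU 0				CPU 1
	-----				-----
	tmp = rsclp->len; /* 5 */
					tmp = rsclp->len; /* 5 */
	rsclp->len = tmp + 1; /* 6 */
					rsclp->len = tmp + 1; /* 6 */

Two enqueues, but ->len went from 5 to 6 instead of 7, so one count is lost.
Hence the question about either holding the nocb lock or using an atomic
increment there.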

> > @@ -245,7 +280,7 @@ void rcu_segcblist_enqueue(struct rcu_segcblist *rsclp,
> > struct rcu_head *rhp)
> > {
> > rcu_segcblist_inc_len(rsclp);
> > - smp_mb(); /* Ensure counts are updated before callback is enqueued. */
>
> That's a significant change that shouldn't be hidden, unexplained, in an
> unrelated patch, or it may easily be missed. I'd suggest moving this line into
> "rcu/tree: Remove redundant smp_mb() in rcu_do_batch" (with the title updated,
> perhaps) and maybe putting that patch at the beginning of the series.

Will do as you suggest, makes sense.

> > + rcu_segcblist_inc_seglen(rsclp, RCU_NEXT_TAIL);
> > rhp->next = NULL;
> > WRITE_ONCE(*rsclp->tails[RCU_NEXT_TAIL], rhp);
> > WRITE_ONCE(rsclp->tails[RCU_NEXT_TAIL], &rhp->next);
> [...]
> > @@ -330,11 +353,16 @@ void rcu_segcblist_extract_pend_cbs(struct rcu_segcblist *rsclp,
> >
> > if (!rcu_segcblist_pend_cbs(rsclp))
> > return; /* Nothing to do. */
> > + rclp->len = rcu_segcblist_get_seglen(rsclp, RCU_WAIT_TAIL) +
> > + rcu_segcblist_get_seglen(rsclp, RCU_NEXT_READY_TAIL) +
> > + rcu_segcblist_get_seglen(rsclp, RCU_NEXT_TAIL);
> > *rclp->tail = *rsclp->tails[RCU_DONE_TAIL];
> > rclp->tail = rsclp->tails[RCU_NEXT_TAIL];
> > WRITE_ONCE(*rsclp->tails[RCU_DONE_TAIL], NULL);
> > - for (i = RCU_DONE_TAIL + 1; i < RCU_CBLIST_NSEGS; i++)
> > + for (i = RCU_DONE_TAIL + 1; i < RCU_CBLIST_NSEGS; i++) {
> > WRITE_ONCE(rsclp->tails[i], rsclp->tails[RCU_DONE_TAIL]);
> > + rcu_segcblist_set_seglen(rsclp, i, 0);
> > + }
>
> So, that's probably just a matter of personal preference, so feel free to
> ignore but I'd rather do:
>
> rclp->len += rcu_segcblist_get_seglen(rsclp, i);
> rcu_segcblist_set_seglen(rsclp, i, 0);
>
> instead of the big addition above. That way, if a new index ever gets added to
> or renamed in the segcblist, it will automatically be taken into account. It
> also spares a few lines.

Makes sense, will do.
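
That is, the tail of rcu_segcblist_extract_pend_cbs() would then read roughly
like this (untested sketch; note that rclp->len must now be zeroed before the
loop, since the loop accumulates into it):

	rclp->len = 0;
	*rclp->tail = *rsclp->tails[RCU_DONE_TAIL];
	rclp->tail = rsclp->tails[RCU_NEXT_TAIL];
	WRITE_ONCE(*rsclp->tails[RCU_DONE_TAIL], NULL);
	for (i = RCU_DONE_TAIL + 1; i < RCU_CBLIST_NSEGS; i++) {
		WRITE_ONCE(rsclp->tails[i], rsclp->tails[RCU_DONE_TAIL]);
		rclp->len += rcu_segcblist_get_seglen(rsclp, i);
		rcu_segcblist_set_seglen(rsclp, i, 0);
	}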

thanks,

- Joel
