Subject: Re: [PATCH RFC tip/core/rcu 05/14] rcu: Abstract sequence counting from synchronize_sched_expedited()

On Tue, Jun 30, 2015 at 03:25:45PM -0700, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
>
> This commit creates rcu_exp_gp_seq_start() and rcu_exp_gp_seq_end() to
> bracket an expedited grace period, rcu_exp_gp_seq_snap() to snapshot the
> sequence counter, and rcu_exp_gp_seq_done() to check to see if a full
> expedited grace period has elapsed since the snapshot. These will be
> applied to synchronize_rcu_expedited(). These are defined in terms of
> the underlying rcu_seq_start(), rcu_seq_end(), rcu_seq_snap(), and
> rcu_seq_done(), which will be applied to _rcu_barrier().

It would be good to explain why you cannot use seqcount primitives.
They're >.< close.
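
For reference, the write-side seqcount primitives in
include/linux/seqlock.h currently look roughly like this (lockdep
variants aside):

static inline void raw_write_seqcount_begin(seqcount_t *s)
{
	s->sequence++;
	smp_wmb();	/* order the increment before the writer's stores */
}

static inline void raw_write_seqcount_end(seqcount_t *s)
{
	smp_wmb();	/* order the writer's stores before the increment */
	s->sequence++;
}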

> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
> kernel/rcu/tree.c | 68 +++++++++++++++++++++++++++++++++++++++++++++++--------
> 1 file changed, 58 insertions(+), 10 deletions(-)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index c58fd27b4a22..f96500e462fd 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -3307,6 +3307,60 @@ void cond_synchronize_sched(unsigned long oldstate)
> }
> EXPORT_SYMBOL_GPL(cond_synchronize_sched);
>
> +/* Adjust sequence number for start of update-side operation. */
> +static void rcu_seq_start(unsigned long *sp)
> +{
> + WRITE_ONCE(*sp, *sp + 1);
> + smp_mb(); /* Ensure update-side operation after counter increment. */
> + WARN_ON_ONCE(!(*sp & 0x1));
> +}

That wants to be an ACQUIRE, right?

> +
> +/* Adjust sequence number for end of update-side operation. */
> +static void rcu_seq_end(unsigned long *sp)
> +{
> + smp_mb(); /* Ensure update-side operation before counter increment. */

And that wants to be a RELEASE, right?

> + WRITE_ONCE(*sp, *sp + 1);

smp_store_release(), even if balanced against a full barrier, might be
better here?
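
Something like this, say (just a sketch; the release orders the whole
update-side operation before the counter increment, and the counter
stays readable with a plain READ_ONCE()):

static void rcu_seq_end(unsigned long *sp)
{
	smp_store_release(sp, *sp + 1);	/* prior accesses before increment */
	WARN_ON_ONCE(*sp & 0x1);
}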

> + WARN_ON_ONCE(*sp & 0x1);
> +}

And the only difference between these and
raw_write_seqcount_{begin,end}() is the smp_wmb() vs your smp_mb().

Since seqcounts have distinct reader and writer sides, we really only
care about ordering the stores there. I suspect you really do care
about reads between these 'sequence points'. A few words to that effect
would explain the existence of these primitives.
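
Concretely, an smp_wmb() would let a read leak across the counter
update; the smp_mb() variants forbid that. A sketch, with some_state a
hypothetical stand-in for whatever the expedited machinery reads:

static unsigned long demo_read_under_gp(struct rcu_state *rsp,
					unsigned long *some_state)
{
	unsigned long r;

	rcu_seq_start(&rsp->expedited_sequence);
	r = READ_ONCE(*some_state);	/* smp_mb() in _start() keeps this after the first increment */
	rcu_seq_end(&rsp->expedited_sequence);	/* smp_mb() in _end() keeps it before the second */
	return r;
}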

> +/* Take a snapshot of the update side's sequence number. */
> +static unsigned long rcu_seq_snap(unsigned long *sp)
> +{
> + unsigned long s;
> +
> + smp_mb(); /* Caller's modifications seen first by other CPUs. */
> + s = (READ_ONCE(*sp) + 3) & ~0x1;
> + smp_mb(); /* Above access must not bleed into critical section. */

smp_load_acquire() then?
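
I.e., something like this (just a sketch, keeping the first smp_mb()
for the caller's prior modifications):

static unsigned long rcu_seq_snap(unsigned long *sp)
{
	unsigned long s;

	smp_mb(); /* Caller's modifications seen first by other CPUs. */
	s = (smp_load_acquire(sp) + 3) & ~0x1;	/* ACQUIRE keeps the CS after the load */
	return s;
}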

> + return s;
> +}
> +
> +/*
> + * Given a snapshot from rcu_seq_snap(), determine whether or not a
> + * full update-side operation has occurred.
> + */
> +static bool rcu_seq_done(unsigned long *sp, unsigned long s)
> +{
> + return ULONG_CMP_GE(READ_ONCE(*sp), s);

I'm always amused you're not wanting to rely on 2's complement for
integer overflow. I _know_ it's undefined behaviour in the C rule book,
but the entire rest of the kernel hard-assumes it.
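
For reference, ULONG_CMP_GE() is (from include/linux/rcupdate.h):

#define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))

The unsigned subtraction is at least well-defined across wraparound,
UB-lawyers notwithstanding.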

> +}
> +
> +/* Wrapper functions for expedited grace periods. */
> +static void rcu_exp_gp_seq_start(struct rcu_state *rsp)
> +{
> + rcu_seq_start(&rsp->expedited_sequence);
> +}
> +static void rcu_exp_gp_seq_end(struct rcu_state *rsp)
> +{
> + rcu_seq_end(&rsp->expedited_sequence);
> +}
> +static unsigned long rcu_exp_gp_seq_snap(struct rcu_state *rsp)
> +{
> + return rcu_seq_snap(&rsp->expedited_sequence);
> +}
> +static bool rcu_exp_gp_seq_done(struct rcu_state *rsp, unsigned long s)
> +{
> + return rcu_seq_done(&rsp->expedited_sequence, s);
> +}

This is wrappers for wrappers' sake? Why?

