Subject: Re: [PATCH tip/core/rcu 6/7] rcu: Make expedited grace periods recheck dyntick idle state

On Mon, Nov 14, 2016 at 08:57:12AM -0800, Paul E. McKenney wrote:
> Expedited grace periods check dyntick-idle state, and avoid sending
> IPIs to idle CPUs, including those running guest OSes, and, on NOHZ_FULL
> kernels, nohz_full CPUs. However, the kernel has been observed checking
> a CPU while it was non-idle, but sending the IPI after it had gone
> idle. This commit therefore rechecks idle state immediately before
> sending the IPI, refraining from IPIing CPUs that have since gone idle.
>
> Reported-by: Rik van Riel <riel@redhat.com>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

atomic_add_return(0, ...) seems odd. Do you actually want that, rather
than atomic_read(...)? If so, can you please document exactly why?
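
For context, the two are not interchangeable here: atomic_read() is a
plain (READ_ONCE-style) load with no ordering guarantees, while
atomic_add_return(), like any value-returning atomic RMW, implies full
memory-barrier semantics on both sides. A rough userspace C11 analogue
of the difference (a sketch, not the kernel implementation):

#include <stdatomic.h>

/* Like atomic_add_return(0, &v): a value-returning RMW, fully
 * ordered (roughly a seq_cst fetch_add in C11 terms). */
static int add_return_zero(atomic_int *v)
{
	return atomic_fetch_add_explicit(v, 0, memory_order_seq_cst);
}

/* Like atomic_read(&v): a plain load with no ordering. */
static int plain_read(atomic_int *v)
{
	return atomic_load_explicit(v, memory_order_relaxed);
}

If the full barrier is the point, that seems worth a comment.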

>  kernel/rcu/tree.h     |  1 +
>  kernel/rcu/tree_exp.h | 12 +++++++++++-
>  2 files changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
> index e99a5234d9ed..fe98dd24adf8 100644
> --- a/kernel/rcu/tree.h
> +++ b/kernel/rcu/tree.h
> @@ -404,6 +404,7 @@ struct rcu_data {
>  	atomic_long_t exp_workdone1;	/* # done by others #1. */
>  	atomic_long_t exp_workdone2;	/* # done by others #2. */
>  	atomic_long_t exp_workdone3;	/* # done by others #3. */
> +	int exp_dynticks_snap;		/* Double-check need for IPI. */
>
>  	/* 7) Callback offloading. */
>  #ifdef CONFIG_RCU_NOCB_CPU
> diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
> index 24343eb87b58..d3053e99fdb6 100644
> --- a/kernel/rcu/tree_exp.h
> +++ b/kernel/rcu/tree_exp.h
> @@ -358,8 +358,10 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
>  		struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
>  		struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
>
> +		rdp->exp_dynticks_snap =
> +			atomic_add_return(0, &rdtp->dynticks);
>  		if (raw_smp_processor_id() == cpu ||
> -		    !(atomic_add_return(0, &rdtp->dynticks) & 0x1) ||
> +		    !(rdp->exp_dynticks_snap & 0x1) ||
>  		    !(rnp->qsmaskinitnext & rdp->grpmask))
>  			mask_ofl_test |= rdp->grpmask;
>  	}
> @@ -377,9 +379,17 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
>  	/* IPI the remaining CPUs for expedited quiescent state. */
>  	for_each_leaf_node_possible_cpu(rnp, cpu) {
>  		unsigned long mask = leaf_node_cpu_bit(rnp, cpu);
> +		struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
> +		struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
> +
>  		if (!(mask_ofl_ipi & mask))
>  			continue;
>  retry_ipi:
> +		if (atomic_add_return(0, &rdtp->dynticks) !=
> +		    rdp->exp_dynticks_snap) {
> +			mask_ofl_test |= mask;
> +			continue;
> +		}
>  		ret = smp_call_function_single(cpu, func, rsp, 0);
>  		if (!ret) {
>  			mask_ofl_ipi &= ~mask;
> --
> 2.5.2
>
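
For the archive, a stand-alone model of the double-check the patch adds
(a hypothetical userspace sketch with made-up names, not the kernel
code): sample the counter during the initial scan, resample it
immediately before the IPI, and skip the IPI if the counter was even
(dyntick-idle) at scan time or has changed since; any change means the
CPU has passed through an idle state, which is itself a quiescent state
for the expedited grace period.

#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical model: odd counter = non-idle, even = dyntick-idle. */
struct cpu_model {
	atomic_int dynticks;
	int snap;		/* snapshot from the initial scan */
};

static void scan_cpu(struct cpu_model *c)
{
	/* Full-barrier read, mirroring atomic_add_return(0, ...). */
	c->snap = atomic_fetch_add(&c->dynticks, 0);
}

static bool still_needs_ipi(struct cpu_model *c)
{
	if (!(c->snap & 0x1))	/* already idle at scan time */
		return false;
	/* Recheck right before sending the IPI: if the counter moved,
	 * the CPU has been through dyntick-idle since the snapshot. */
	return atomic_fetch_add(&c->dynticks, 0) == c->snap;
}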
