Subject: Re: [PATCH V2 6/7] rcu: clear the special.b.need_qs in rcu_note_context_switch()
On Sat, Nov 02, 2019 at 12:45:58PM +0000, Lai Jiangshan wrote:
> It is better to clear special.b.need_qs either when it is
> replaced by special.b.blocked or when it has really fulfilled its
> goal in rcu_preempt_deferred_qs_irqrestore().
>
> This makes rcu_qs() easier to understand, and also prepares for
> the percpu rcu_preempt_depth patch, which requires the rcu_special
> bits to be cleared in irq-disabled context.
>
> Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>

This one is a (possible) optimization.

Right now, when the CPU actually passes through the quiescent state,
we clear this field. The quiescent state is not reported until later.
Waiting to clear it until later might cause extra unneeded quiescent-state
work to happen. But your point is that the current code might leave
->rcu_read_unlock_special momentarily zero, causing possible trouble
with the remainder of this series, right?
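For illustration, here is a rough standalone model (not kernel code; the
field names only loosely mirror union rcu_special from include/linux/sched.h
of that era) of why a momentarily-zero word can matter: "is anything special
pending?" style checks look at the whole-word view .s, so clearing need_qs
before the quiescent state is actually reported can make the task briefly
look as though it has nothing outstanding.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Rough model of union rcu_special; names and layout are illustrative only. */
union special_sketch {
	struct {
		uint8_t blocked;	/* Preempted within an RCU read-side critical section. */
		uint8_t need_qs;	/* Current grace period wants a QS from this task. */
		uint8_t exp_hint;	/* Expedited grace-period hint. */
		uint8_t deferred_qs;	/* Quiescent-state report was deferred. */
	} b;				/* Individual bits. */
	uint32_t s;			/* Whole-word view used by "anything pending?" checks. */
};

/* The kind of whole-word test that a momentarily-zero value would fool. */
static bool special_pending(const union special_sketch *sp)
{
	return sp->s != 0;
}

int main(void)
{
	union special_sketch sp = { .b.need_qs = 1 };

	printf("pending=%d\n", special_pending(&sp));	/* 1: a QS is still owed. */
	sp.b.need_qs = 0;	/* Cleared at the local QS, before it is reported... */
	printf("pending=%d\n", special_pending(&sp));	/* 0: nothing appears outstanding. */
	return 0;
}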

Hmmm...

The "right" way to do this would be to have another flag saying
"quiescent state encountered but not yet reported". This would keep
->rcu_read_unlock_special non-zero throughout, and would avoid useless
work looking for additional unneeded quiescent states. Or perhaps have
need_qs be zero for don't need, one for need, and two for have but not
yet reported.
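For concreteness, a minimal standalone sketch of that second suggestion
(a three-valued need_qs); the state names and the simplified layout below
are hypothetical, not taken from any actual patch:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical three-valued need_qs encoding; the names are illustrative only. */
enum need_qs_state {
	NEED_QS_NONE	= 0,	/* No quiescent state needed. */
	NEED_QS_PENDING	= 1,	/* Needed, but not yet encountered locally. */
	NEED_QS_SEEN	= 2,	/* Encountered locally, but not yet reported. */
};

union special_sketch {
	struct {
		uint8_t blocked;
		uint8_t need_qs;	/* Holds a need_qs_state value. */
		uint8_t exp_hint;
		uint8_t deferred_qs;
	} b;
	uint32_t s;			/* Whole-word "anything pending?" view. */
};

int main(void)
{
	union special_sketch sp = { .b.need_qs = NEED_QS_PENDING };

	/* Local quiescent state: record "seen" rather than clearing, so .s stays non-zero. */
	if (sp.b.need_qs == NEED_QS_PENDING)
		sp.b.need_qs = NEED_QS_SEEN;
	printf("after local QS: s=%#x\n", (unsigned int)sp.s);	/* Still non-zero. */

	/* Only the eventual report (e.g. the deferred-QS path) clears the field. */
	sp.b.need_qs = NEED_QS_NONE;
	printf("after report:   s=%#x\n", (unsigned int)sp.s);	/* Now zero. */
	return 0;
}

The first suggestion (a separate "seen but not yet reported" flag) would
amount to the same thing, with an extra field in the struct instead of a
second non-zero value in need_qs.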

Thoughts? Other approaches?

Thanx, Paul

> ---
> kernel/rcu/tree_plugin.h | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index eb5906c55c8d..e16c3867d2ff 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -264,8 +264,6 @@ static void rcu_qs(void)
>  				       __this_cpu_read(rcu_data.gp_seq),
>  				       TPS("cpuqs"));
>  		__this_cpu_write(rcu_data.cpu_no_qs.b.norm, false);
> -		barrier(); /* Coordinate with rcu_flavor_sched_clock_irq(). */
> -		WRITE_ONCE(current->rcu_read_unlock_special.b.need_qs, false);
>  	}
>  }
>
> @@ -297,6 +295,7 @@ void rcu_note_context_switch(bool preempt)
>  		/* Possibly blocking in an RCU read-side critical section. */
>  		rnp = rdp->mynode;
>  		raw_spin_lock_rcu_node(rnp);
> +		t->rcu_read_unlock_special.b.need_qs = false;
>  		t->rcu_read_unlock_special.b.blocked = true;
>  		t->rcu_blocked_node = rnp;
>
> --
> 2.20.1
>
