From: Josh Don <>
Date: Mon, 26 Apr 2021 15:21:36 -0700
Subject: Re: [PATCH 04/19] sched: Prepare for Core-wide rq->lock
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index f732642e3e09..1a81e9cc9e5d 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -290,6 +290,10 @@ static void sched_core_assert_empty(void)
>  static void __sched_core_enable(void)
>  {
>  	static_branch_enable(&__sched_core_enabled);
> +	/*
> +	 * Ensure raw_spin_rq_*lock*() have completed before flipping.
> +	 */
> +	synchronize_sched();
synchronize_rcu() -- synchronize_sched() no longer exists now that the RCU flavors have been consolidated.
>  	__sched_core_flip(true);
>  	sched_core_assert_empty();
>  }
> @@ -449,16 +453,22 @@ void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
>  {
>  	raw_spinlock_t *lock;
>
> +	preempt_disable();
>  	if (sched_core_disabled()) {
>  		raw_spin_lock_nested(&rq->__lock, subclass);
> +		/* preempt *MUST* still be disabled here */
> +		preempt_enable_no_resched();
>  		return;
>  	}
This approach looks good to me. I'm guessing you went this route instead of doing the re-check after locking in order to optimize the disabled case?
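
For reference, this is roughly what I meant by re-checking after locking (a sketch only: drop the disabled fast path entirely, so every acquisition pays the extra __rq_lockp() load and compare even when core scheduling is off):

	/* Hypothetical alternative, not what the patch does. */
	void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
	{
		raw_spinlock_t *lock;

		for (;;) {
			lock = __rq_lockp(rq);
			raw_spin_lock_nested(lock, subclass);
			/* Re-check under the lock; retry if we raced with a flip. */
			if (likely(lock == __rq_lockp(rq)))
				return;
			raw_spin_unlock(lock);
		}
	}
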
Recommend a comment that the preempt_disable() here pairs with the synchronize_rcu() in __sched_core_enable().
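
Something along these lines, maybe (wording is only a suggestion, based on the synchronize_rcu() sitting before __sched_core_flip(true) in the hunk above):

	/*
	 * Pairs with the synchronize_rcu() in __sched_core_enable():
	 * if sched_core_disabled() is observed below, __sched_core_flip(true)
	 * cannot run until this preempt-disabled section ends, so
	 * rq->__lock is still the lock to take.
	 */
	preempt_disable();
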
>
>  	for (;;) {
>  		lock = __rq_lockp(rq);
>  		raw_spin_lock_nested(lock, subclass);
> -		if (likely(lock == __rq_lockp(rq)))
> +		if (likely(lock == __rq_lockp(rq))) {
> +			/* preempt *MUST* still be disabled here */
> +			preempt_enable_no_resched();
>  			return;
> +		}
>  		raw_spin_unlock(lock);
>  	}
>  }