Date: Tue, 4 May 2021 09:38:18 +0200
From: Peter Zijlstra <>
Subject: Re: [PATCH 04/19] sched: Prepare for Core-wide rq->lock
On Thu, Apr 29, 2021 at 01:39:54PM -0700, Josh Don wrote:
> > > +void double_rq_lock(struct rq *rq1, struct rq *rq2)
> > > +{
> > > +	lockdep_assert_irqs_disabled();
> > > +
> > > +	if (rq1->cpu > rq2->cpu)
> >
> > It's still a bit hard for me to digest this function, I guess using (rq->cpu)
> > can't guarantee the sequence of locking when coresched is enabled.
> >
> > - cpu1 and cpu7 shares lockA
> > - cpu2 and cpu8 shares lockB
> >
> > double_rq_lock(1,8) leads to lock(A) and lock(B)
> > double_rq_lock(7,2) leads to lock(B) and lock(A)
Good one!
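(To spell the ABBA out, here is a stand-alone user-space sketch -- the cpu->lock mapping and helper names are made up for illustration, this is not kernel code -- which just replays the two calls and prints the order in which the "order by cpu number" rule would take the shared locks:)

#include <stdio.h>

/* hypothetical mapping from the example: cpu1/cpu7 -> lock A, cpu2/cpu8 -> lock B */
static char lock_of(int cpu)
{
	return (cpu == 1 || cpu == 7) ? 'A' : 'B';
}

/* models the "order by cpu number" rule from the quoted hunk */
static void order_by_cpu(int c1, int c2)
{
	if (c1 > c2) {
		int t = c1; c1 = c2; c2 = t;
	}
	printf("double_rq_lock(%d,%d): lock(%c) then lock(%c)\n",
	       c1, c2, lock_of(c1), lock_of(c2));
}

int main(void)
{
	order_by_cpu(1, 8);	/* lock(A) then lock(B) */
	order_by_cpu(7, 2);	/* lock(B) then lock(A) -> ABBA */
	return 0;
}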
> > change to below to avoid ABBA?
> > +	if (__rq_lockp(rq1) > __rq_lockp(rq2))
This, however, is badly broken: not only does it suffer from the problem Josh pointed out, it also breaks the rq->__lock ordering vs __sched_core_flip(), which was the whole reason the ordering needed changing in the first place.
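(For completeness, the reason the lock pointer can't serve as an ordering key: with core scheduling the lock an rq resolves to changes at runtime. A simplified sketch, reconstructed from memory of the series and not the verbatim kernel source, of roughly what __rq_lockp() does:)

static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
{
	/*
	 * Simplified: with core scheduling enabled all SMT siblings
	 * resolve to the shared per-core lock, otherwise each rq uses
	 * its private lock. __sched_core_flip() switches between the
	 * two at runtime, so any ordering derived from this pointer
	 * can change underneath a concurrent double_rq_lock().
	 */
	if (rq->core_enabled)
		return &rq->core->__lock;

	return &rq->__lock;
}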
> I'd propose an alternative but similar idea: order by core, then break
> ties by ordering on cpu.
>
> +#ifdef CONFIG_SCHED_CORE
> +	if (rq1->core->cpu > rq2->core->cpu)
> +		swap(rq1, rq2);
> +	else if (rq1->core->cpu == rq2->core->cpu && rq1->cpu > rq2->cpu)
> +		swap(rq1, rq2);
> +#else
> 	if (rq1->cpu > rq2->cpu)
> 		swap(rq1, rq2);
> +#endif
I've written it like so:
static inline bool rq_order_less(struct rq *rq1, struct rq *rq2)
{
#ifdef CONFIG_SCHED_CORE
	if (rq1->core->cpu < rq2->core->cpu)
		return true;

	if (rq1->core->cpu > rq2->core->cpu)
		return false;
#endif
	return rq1->cpu < rq2->cpu;
}

/*
 * double_rq_lock - safely lock two runqueues
 */
void double_rq_lock(struct rq *rq1, struct rq *rq2)
{
	lockdep_assert_irqs_disabled();

	if (rq_order_less(rq2, rq1))
		swap(rq1, rq2);

	raw_spin_rq_lock(rq1);
	if (rq_lockp(rq1) == rq_lockp(rq2))
		return;

	raw_spin_rq_lock_nested(rq2, SINGLE_DEPTH_NESTING);
}
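(FWIW, a stand-alone user-space sanity check of that ordering rule -- the struct layout, topology and helpers below are made up for illustration and not part of the patch. It replays the cpu1/cpu7 vs cpu2/cpu8 example and confirms that both call sites now pick the shared locks in the same order, so the ABBA goes away:)

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct rq {
	int cpu;
	struct rq *core;	/* leader rq of this CPU's core */
};

static bool rq_order_less(struct rq *rq1, struct rq *rq2)
{
	if (rq1->core->cpu < rq2->core->cpu)
		return true;

	if (rq1->core->cpu > rq2->core->cpu)
		return false;

	return rq1->cpu < rq2->cpu;
}

/* returns the rq whose lock double_rq_lock() would take first */
static struct rq *first_locked(struct rq *rq1, struct rq *rq2)
{
	if (rq_order_less(rq2, rq1)) {
		struct rq *tmp = rq1; rq1 = rq2; rq2 = tmp;
	}
	return rq1;
}

int main(void)
{
	struct rq rq[9];
	int i;

	/* hypothetical topology: cpu1/cpu7 share a core, cpu2/cpu8 share a core */
	for (i = 1; i <= 8; i++)
		rq[i] = (struct rq){ .cpu = i, .core = &rq[i] };
	rq[7].core = &rq[1];
	rq[8].core = &rq[2];

	/* both call sites now take cpu1's core lock first */
	assert(first_locked(&rq[1], &rq[8])->core == &rq[1]);
	assert(first_locked(&rq[7], &rq[2])->core == &rq[1]);
	printf("consistent lock order\n");
	return 0;
}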