Date: Tue, 16 Apr 2019 21:43:51 +0800
From: Aaron Lu <>
Subject: Re: [RFC][PATCH 13/16] sched: Add core wide task selection and scheduling.
On Tue, Apr 02, 2019 at 10:28:12AM +0200, Peter Zijlstra wrote:
> On Tue, Apr 02, 2019 at 02:46:13PM +0800, Aaron Lu wrote:
...
> > Perhaps we can test if max is on the same cpu as class_pick and then
> > use cpu_prio_less() or core_prio_less() accordingly here, or just
> > replace core_prio_less(max, p) with cpu_prio_less(max, p) in
> > pick_next_task(). The 2nd obviously breaks the comment of
> > core_prio_less() though: /* cannot compare vruntime across CPUs */.
>
> Right, so as the comment states, you cannot directly compare vruntime
> across CPUs, doing that is completely buggered.
>
> That also means that the cpu_prio_less(max, class_pick) in pick_task()
> is buggered, because there is no saying @max is on this CPU to begin
> with.
I find it difficult to decide which of two fair_sched_class tasks has
the higher priority when the two tasks are on different CPUs.
Please see below.
> Another approach would be something like the below:
>
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -87,7 +87,7 @@ static inline int __task_prio(struct tas
>   */
>
>  /* real prio, less is less */
> -static inline bool __prio_less(struct task_struct *a, struct task_struct *b, bool runtime)
> +static inline bool __prio_less(struct task_struct *a, struct task_struct *b, u64 vruntime)
>  {
>  	int pa = __task_prio(a), pb = __task_prio(b);
>
> @@ -104,21 +104,25 @@ static inline bool __prio_less(struct ta
>  	if (pa == -1) /* dl_prio() doesn't work because of stop_class above */
>  		return !dl_time_before(a->dl.deadline, b->dl.deadline);
>
> -	if (pa == MAX_RT_PRIO + MAX_NICE && runtime) /* fair */
> -		return !((s64)(a->se.vruntime - b->se.vruntime) < 0);
> +	if (pa == MAX_RT_PRIO + MAX_NICE) /* fair */
> +		return !((s64)(a->se.vruntime - vruntime) < 0);
>
>  	return false;
>  }
>
>  static inline bool cpu_prio_less(struct task_struct *a, struct task_struct *b)
>  {
> -	return __prio_less(a, b, true);
> +	return __prio_less(a, b, b->se.vruntime);
>  }
>
>  static inline bool core_prio_less(struct task_struct *a, struct task_struct *b)
>  {
> -	/* cannot compare vruntime across CPUs */
> -	return __prio_less(a, b, false);
> +	u64 vruntime = b->se.vruntime;
> +
> +	vruntime -= task_rq(b)->cfs.min_vruntime;
> +	vruntime += task_rq(a)->cfs.min_vruntime;
(I used task_cfs_rq() instead of task_rq() above.)
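Spelled out, the variant I mean is roughly the sketch below: the same
normalization as above, just with task_cfs_rq() substituted, so with
group scheduling the delta is taken against the task's own group cfs_rq
rather than the root rq's cfs_rq:

static inline bool core_prio_less(struct task_struct *a, struct task_struct *b)
{
	u64 vruntime = b->se.vruntime;

	/* normalize b's vruntime onto a's cfs_rq before comparing */
	vruntime -= task_cfs_rq(b)->min_vruntime;
	vruntime += task_cfs_rq(a)->min_vruntime;

	return __prio_less(a, b, vruntime);
}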
Consider the following scenario (assume cpu0 and cpu1 are SMT siblings
of core0):

1 a cpu-intensive task belonging to cgroupA running on cpu0;
2 launch 'ls' from a shell (bash) which belongs to cgroupB;
3 'ls' is blocked for a long time (if not forever).
Per my limited understanding: launching 'ls' causes bash to fork, and
due to START_DEBIT the newly forked process' vruntime is placed about
6ms (probably not precise) ahead of its cfs_rq's min_vruntime. Since
there is no other running task on that cfs_rq, the cfs_rq's
min_vruntime never gets a chance to advance, so the forked process
keeps that ~6ms distance to its own cfs_rq forever. The normalization
above preserves exactly this distance, so by core_prio_less() it will
always 'lose' to the cpu-intensive task belonging to cgroupA.
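For reference, the placement logic that creates this debt is
place_entity() in kernel/sched/fair.c, which in simplified form
(sleeper credit elided) does:

static void
place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
{
	u64 vruntime = cfs_rq->min_vruntime;

	/* a newly forked entity is debited one full virtual slice */
	if (initial && sched_feat(START_DEBIT))
		vruntime += sched_vslice(cfs_rq, se);

	/* never move an entity backwards in virtual time */
	se->vruntime = max_vruntime(se->vruntime, vruntime);
}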
No idea how to solve this...
> +
> +	return __prio_less(a, b, vruntime);
>  }
>
>  static inline bool __sched_core_less(struct task_struct *a, struct task_struct *b)
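To make the failure mode concrete, here is a toy user-space calculation
(all numbers hypothetical) of what the proposed core_prio_less() /
__prio_less() pair computes in the scenario above:

#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;
typedef int64_t s64;

int main(void)
{
	u64 min_vr_a = 1000000;		/* cpu0 cfs_rq->min_vruntime (ns) */
	u64 min_vr_b = 500000;		/* cpu1 cfs_rq->min_vruntime (ns), never advances */
	u64 hog_vr = min_vr_a;		/* the cgroupA hog hovers near cpu0's min_vruntime */
	u64 ls_vr = min_vr_b + 6000000;	/* START_DEBIT placed 'ls' ~6ms ahead */

	/* normalization step of core_prio_less(hog, ls): */
	u64 vruntime = ls_vr - min_vr_b + min_vr_a;	/* == min_vr_a + 6ms */

	/* fair-class comparison in __prio_less(hog, ls, vruntime): */
	int hog_is_less = !((s64)(hog_vr - vruntime) < 0);

	/* prints 0: the hog never compares "less", so 'ls' keeps losing */
	printf("hog_is_less = %d\n", hog_is_less);
	return 0;
}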