Subject: Re: [PATCH] sched_rt: fix overload bug on rt group scheduling
Hi Peter,

Peter Zijlstra wrote:
> Fixes an easily triggerable BUG() when setting process affinities.
>
> Make sure to count the number of migratable tasks in the same place:
> the root rt_rq. Otherwise the number doesn't make sense and we'll hit
> the BUG in set_cpus_allowed_rt().
>
> Also, make sure we only count tasks, not groups (this is probably
> already taken care of by the fact that rt_se->nr_cpus_allowed will be 0
> for groups, but be more explicit)
>
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Tested-by: Thomas Gleixner <tglx@linutronix.de>
> CC: stable@kernel.org
> ---
> kernel/sched_rt.c | 16 +++++++++++++++-
> 1 files changed, 15 insertions(+), 1 deletions(-)
>
> diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
> index de4469a..c1ee8dc 100644
> --- a/kernel/sched_rt.c
> +++ b/kernel/sched_rt.c
> @@ -10,6 +10,8 @@ static inline struct task_struct *rt_task_of(struct sched_rt_entity *rt_se)
>
> #ifdef CONFIG_RT_GROUP_SCHED
>
> +#define rt_entity_is_task(rt_se) (!(rt_se)->my_q)
> +
> static inline struct rq *rq_of_rt_rq(struct rt_rq *rt_rq)
> {
> return rt_rq->rq;
> @@ -22,6 +24,8 @@ static inline struct rt_rq *rt_rq_of_se(struct sched_rt_entity *rt_se)
>
> #else /* CONFIG_RT_GROUP_SCHED */
>
> +#define rt_entity_is_task(rt_se) (1)
> +
> static inline struct rq *rq_of_rt_rq(struct rt_rq *rt_rq)
> {
> return container_of(rt_rq, struct rq, rt);
> @@ -73,7 +77,7 @@ static inline void rt_clear_overload(struct rq *rq)
>
> static void update_rt_migration(struct rt_rq *rt_rq)
> {
> - if (rt_rq->rt_nr_migratory && (rt_rq->rt_nr_running > 1)) {
> + if (rt_rq->rt_nr_migratory > 1) {
>

The rest of the patch makes sense to me, but I am a little concerned
about this change.

The original logic was designed to catch the condition where you might
have a non-migratory task running and a migratory task queued. That
would mean nr_running == 2 and nr_migratory == 1, which is eligible for
overload handling. (Of course, the opposite could also be true: the
migratory task is running and the non-migratory one is queued. We
cannot discern the difference here, so we go into overload anyway.
That is suboptimal, but functionally correct.)
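
To make that concrete, here is a standalone sketch (plain userspace C
with hypothetical counts, not kernel code) of how the old and new
checks diverge in exactly that scenario:

#include <stdio.h>

int main(void)
{
	/* one non-migratory task running, one migratory task queued */
	unsigned int rt_nr_running   = 2;
	unsigned int rt_nr_migratory = 1;

	int old_check = rt_nr_migratory && (rt_nr_running > 1);	/* 1: overload */
	int new_check = rt_nr_migratory > 1;			/* 0: no overload */

	printf("old: %d, new: %d\n", old_check, new_check);
	return 0;
}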

What can happen now is that you could have the condition above, yet we
will not go into overload unless at least two migratory tasks are
queued. That will undoubtedly open up a potential scheduling latency
for task #2.

I think we really need to qualify overload on both nr_running > 1 and
at least one migratory task present. Is there a way to get at this
state, even if by other means?
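
For instance, something along these lines might work, mirroring the
task-only accounting your patch introduces (an untested sketch;
rt_nr_total is a hypothetical new field counting all rt tasks queued
at the root rt_rq):

static void update_rt_migration(struct rt_rq *rt_rq)
{
	/* overload iff more than one task is queued and at least one can migrate */
	if (rt_rq->rt_nr_total > 1 && rt_rq->rt_nr_migratory) {
		if (!rt_rq->overloaded) {
			rt_set_overload(rq_of_rt_rq(rt_rq));
			rt_rq->overloaded = 1;
		}
	} else if (rt_rq->overloaded) {
		rt_clear_overload(rq_of_rt_rq(rt_rq));
		rt_rq->overloaded = 0;
	}
}

static void inc_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
{
	if (!rt_entity_is_task(rt_se))
		return;

	rt_rq = &rq_of_rt_rq(rt_rq)->rt;

	rt_rq->rt_nr_total++;
	if (rt_se->nr_cpus_allowed > 1)
		rt_rq->rt_nr_migratory++;

	update_rt_migration(rt_rq);
}

(with the matching decrements in dec_rt_migration)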

> if (!rt_rq->overloaded) {
> rt_set_overload(rq_of_rt_rq(rt_rq));
> rt_rq->overloaded = 1;
> @@ -86,6 +90,11 @@ static void update_rt_migration(struct rt_rq *rt_rq)
>
> static void inc_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
> {
> + if (!rt_entity_is_task(rt_se))
> + return;
> +
> + rt_rq = &rq_of_rt_rq(rt_rq)->rt;
> +
> if (rt_se->nr_cpus_allowed > 1)
> rt_rq->rt_nr_migratory++;
>
> @@ -94,6 +103,11 @@ static void inc_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
>
> static void dec_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
> {
> + if (!rt_entity_is_task(rt_se))
> + return;
> +
> + rt_rq = &rq_of_rt_rq(rt_rq)->rt;
> +
> if (rt_se->nr_cpus_allowed > 1)
> rt_rq->rt_nr_migratory--;
>
>
>

