Subject: [PATCH] sched: rt-bandwidth disable fixes
From: Peter Zijlstra <>
Date: Mon, 18 Aug 2008 12:47:09 +0200
On Mon, 2008-08-18 at 00:15 +0200, Dario Faggioli wrote:
> On Sat, 2008-08-16 at 23:29 +0200, Stefani Seibold wrote:
> > After disabling kernel support for "Group CPU scheduler" and applying
> > 'echo -1 > /proc/sys/kernel/sched_rt_runtime_us' the behaviour is as
> > expected.
> >
> > So the problem is located first in the new sched_rt_runtime_us default
> > value and second in the "Group CPU scheduler".
>
> Well, if you have group scheduling configured I think you should do both
>
> # echo -1 > /proc/sys/kernel/sched_rt_runtime_us
> # echo -1 > /dev/cgroup/cpu.rt_runtime_us
>
> if /dev/cgroup is the mount point of the cgroup file system.
>
> In situations like the one you are describing, this worked for me...
> Hope that it also helps you! :-)
Ah, right - I knew I was forgetting something..
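That is, with group scheduling configured it currently takes both writes to
revert to the old behaviour (assuming the cgroup filesystem is mounted at
/dev/cgroup, as in Dario's example):

  # echo -1 > /proc/sys/kernel/sched_rt_runtime_us
  # echo -1 > /dev/cgroup/cpu.rt_runtime_us

The patch below makes the sysctl alone sufficient again.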
(compile tested only)
---
Subject: sched: rt-bandwidth disable fixes
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
Date: Mon Aug 18 12:39:07 CEST 2008
Currently there is no way to revert to the classical behaviour if RT_GROUP_SCHED is set. Fix this by introducing rt_bandwidth_enabled(), which will turn off all the bandwidth accounting if sched_rt_runtime_us is set to a negative value.
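In user-space sketch form, the semantics this aims for - the helper matches
the patch, but the rt_time_accounted() harness and the values are invented
for illustration, not kernel code:

  #include <stdio.h>
  #include <stdint.h>

  #define RUNTIME_INF ((uint64_t)~0ULL)

  /* stand-in for the kernel's sysctl_sched_rt_runtime (us, <0 = off) */
  static int sysctl_sched_rt_runtime = 950000;

  static int rt_bandwidth_enabled(void)
  {
          return sysctl_sched_rt_runtime >= 0;
  }

  /* would this group's RT time be accounted (and maybe throttled)? */
  static int rt_time_accounted(uint64_t group_runtime)
  {
          if (!rt_bandwidth_enabled())
                  return 0;       /* global off switch wins */
          if (group_runtime == RUNTIME_INF)
                  return 0;       /* this group itself is unlimited */
          return 1;
  }

  int main(void)
  {
          printf("%d\n", rt_time_accounted(950000));     /* 1 */
          printf("%d\n", rt_time_accounted(RUNTIME_INF)); /* 0 */
          sysctl_sched_rt_runtime = -1;                   /* sysctl off */
          printf("%d\n", rt_time_accounted(950000));     /* 0 */
          return 0;
  }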
Also fix a bug where we would still account the used time while the limit was set to RUNTIME_INF - causing a long throttle period once the limit was lowered again.
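To see the magnitude: rt_time kept growing while the limit was RUNTIME_INF,
and once throttled a group only recovers at roughly one runtime's worth of
rt_time per period, being unthrottled when rt_time drops below runtime again.
A toy calculation (plain C, assuming that decay rule and the 0.95s/1s
defaults):

  #include <stdio.h>

  int main(void)
  {
          double runtime = 0.95;  /* of a 1s period (sysctl defaults) */
          double rt_time = 100.0; /* built up while the limit was INF */
          int periods = 0;

          /* stays throttled until rt_time < runtime */
          while (rt_time >= runtime) {
                  rt_time -= runtime;     /* decay per period tick */
                  periods++;
          }
          printf("throttled for ~%d periods (~%ds)\n", periods, periods);
          return 0;
  }

i.e. roughly 105s of throttling for 100s of unaccounted runtime.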
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/sched.c    |    9 ++++++++-
 kernel/sched_rt.c |   16 +++++++++-------
 2 files changed, 17 insertions(+), 8 deletions(-)
Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -204,11 +204,13 @@ void init_rt_bandwidth(struct rt_bandwid
 	rt_b->rt_period_timer.cb_mode = HRTIMER_CB_IRQSAFE_NO_SOFTIRQ;
 }
 
+static inline int rt_bandwidth_enabled(void);
+
 static void start_rt_bandwidth(struct rt_bandwidth *rt_b)
 {
 	ktime_t now;
 
-	if (rt_b->rt_runtime == RUNTIME_INF)
+	if (rt_bandwidth_enabled() && rt_b->rt_runtime == RUNTIME_INF)
 		return;
 
 	if (hrtimer_active(&rt_b->rt_period_timer))
@@ -839,6 +841,11 @@ static inline u64 global_rt_runtime(void
 	return (u64)sysctl_sched_rt_runtime * NSEC_PER_USEC;
 }
 
+static inline int rt_bandwidth_enabled(void)
+{
+	return sysctl_sched_rt_runtime >= 0;
+}
+
 #ifndef prepare_arch_switch
 # define prepare_arch_switch(next)	do { } while (0)
 #endif
Index: linux-2.6/kernel/sched_rt.c
===================================================================
--- linux-2.6.orig/kernel/sched_rt.c
+++ linux-2.6/kernel/sched_rt.c
@@ -386,7 +386,7 @@ static int do_sched_rt_period_timer(stru
 	int i, idle = 1;
 	cpumask_t span;
 
-	if (rt_b->rt_runtime == RUNTIME_INF)
+	if (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF)
 		return 1;
 
 	span = sched_rt_period_mask();
@@ -438,9 +438,6 @@ static int sched_rt_runtime_exceeded(str
 {
 	u64 runtime = sched_rt_runtime(rt_rq);
 
-	if (runtime == RUNTIME_INF)
-		return 0;
-
 	if (rt_rq->rt_throttled)
 		return rt_rq_throttled(rt_rq);
 
@@ -487,13 +484,18 @@ static void update_curr_rt(struct rq *rq
 	curr->se.exec_start = rq->clock;
 	cpuacct_charge(curr, delta_exec);
 
+	if (!rt_bandwidth_enabled())
+		return;
+
 	for_each_sched_rt_entity(rt_se) {
 		rt_rq = rt_rq_of_se(rt_se);
 
 		spin_lock(&rt_rq->rt_runtime_lock);
-		rt_rq->rt_time += delta_exec;
-		if (sched_rt_runtime_exceeded(rt_rq))
-			resched_task(curr);
+		if (sched_rt_runtime(rt_rq) != RUNTIME_INF) {
+			rt_rq->rt_time += delta_exec;
+			if (sched_rt_runtime_exceeded(rt_rq))
+				resched_task(curr);
+		}
 		spin_unlock(&rt_rq->rt_runtime_lock);
 	}
 }