Subject: Re: [PATCH 2/2] nohz: make nohz_full imply isolcpus
On Sat, 2015-04-04 at 04:03 +0200, Mike Galbraith wrote:
> On Fri, 2015-04-03 at 15:21 -0400, Chris Metcalf wrote:
> > On 04/03/2015 02:08 PM, Mike Galbraith wrote:
> > > On Fri, 2015-04-03 at 12:24 -0400, cmetcalf@ezchip.com wrote:
> > > > From: Chris Metcalf <cmetcalf@ezchip.com>
> > > >
> > > > It's not clear that nohz_full is useful without isolcpus also
> > > > set, since otherwise the scheduler has to run periodically to
> > > > try to determine whether to steal work from other cores.
> > > >
> > > > Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
> > > Ack!  nohz_full= as currently defined makes zero sense when the
> > > cpu set (which should be spelled cpuset) remains connected to the
> > > scheduler.  Perturbation of tasks to PREVENT cpu domination is
> > > what the scheduler does for a living.  Sprinkling microsecond
> > > savers all over the kernel is pretty silly if you don't shut down
> > > the mother lode of perturbation.
> >
> > Sounds like a thumbs up for this patch, then? :-)
>
> Yup. The other thumb turns in the up direction when folks start
> spelling cpuset properly ;-) Static isolcpus was supposed to go
> away.
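
(For the record, the pairing being argued for is just both switches naming
the same CPUs on the boot line, something like the below, with the CPU list
only illustrative:

	nohz_full=1-7 isolcpus=1-7

i.e. the tick shutdown and the scheduler isolation cover the same set.)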

Speaking of microsecond savers, the (ick) deferment experiment below
cut 60-core jitter in half.  Shooting the clocksource watchdog fixes the
alternating ~15us/~5us tick on my desktop box.
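
(Aside: on x86 boxes where the TSC is trusted, booting with

	tsc=reliable

effectively shuts the clocksource watchdog down altogether; the hack below
keeps the verification alive and merely steers it away from the nohz_full
set.)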

With workqueue twiddles and whatnot floating around, the thing is
starting to look viable.

---
 kernel/sched/core.c       |    5 +++--
 kernel/time/clocksource.c |    5 +++++
 2 files changed, 8 insertions(+), 2 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2604,12 +2604,13 @@ u64 scheduler_tick_max_deferment(void)
 	struct rq *rq = this_rq();
 	unsigned long next, now = ACCESS_ONCE(jiffies);
 
-	next = rq->last_sched_tick + HZ;
+	next = (rq->last_sched_tick + HZ) | (rq->clock & 0x3f);
 
 	if (time_before_eq(next, now))
 		return 0;
 
-	return jiffies_to_nsecs(next - now);
+	/* Add noise to avoid CPUs colliding at tick boundaries */
+	return jiffies_to_nsecs(next - now) | (rq->clock & 0xfffff);
 }
 #endif

--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -267,8 +267,13 @@ static void clocksource_watchdog(unsigne
 	 * to each other.
 	 */
 	next_cpu = cpumask_next(raw_smp_processor_id(), cpu_online_mask);
+skip_nohz_full:
 	if (next_cpu >= nr_cpu_ids)
 		next_cpu = cpumask_first(cpu_online_mask);
+	if (next_cpu && tick_nohz_full_cpu(next_cpu)) {
+		next_cpu = cpumask_next(next_cpu, cpu_online_mask);
+		goto skip_nohz_full;
+	}
 	watchdog_timer.expires += WATCHDOG_INTERVAL;
 	add_timer_on(&watchdog_timer, next_cpu);
 out:
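
For anyone who wants to see the noise trick by itself, here is a throwaway
userspace sketch (made-up per-cpu clock values, obviously not kernel code)
of what OR-ing a few low clock bits into an otherwise identical deadline
buys: each CPU ends up with its own expiry instead of all of them piling
onto the same tick edge.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* same one-second deadline on every "CPU" */
	uint64_t deadline_ns = 1000000000ULL;
	/* stand-ins for per-cpu rq->clock; the values are made up */
	uint64_t clock[4] = { 0x11111111ULL, 0x2468acefULL,
			      0x035791bdULL, 0x48c048c0ULL };
	int cpu;

	for (cpu = 0; cpu < 4; cpu++) {
		/* same 20-bit mask as the patch: up to ~1ms of per-cpu skew */
		uint64_t noisy = deadline_ns | (clock[cpu] & 0xfffff);
		printf("cpu%d expires at %llu ns\n",
		       cpu, (unsigned long long)noisy);
	}
	return 0;
}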
