Date:	Tue, 4 Jul 2017 11:41:41 +0200
From:	Peter Zijlstra <>
Subject:	Re: [RFC][PATCH] sched: attach extra runtime to the right avg
On Sun, Jul 02, 2017 at 11:37:18AM +0200, Ingo Molnar wrote:
> * josef@toxicpanda.com <josef@toxicpanda.com> wrote:
>
> > From: Josef Bacik <jbacik@fb.com>
> >
> > We only track the load avg of a se in 1024 ns chunks, so in order to
> > make up for the loss of the < 1024 ns part of a run/sleep delta we only
> > add the time we processed to the se->avg.last_update_time.  The problem
> > is there is no way to know if this extra time was while we were asleep
> > or while we were running.  Instead keep track of the remainder and apply
> > it in the appropriate place.  If the remainder was while we were
> > running, add it to the delta the next time we update the load avg while
> > running, and the same for sleeping.  This (coupled with other fixes)
> > mostly fixes the regression to my workload introduced by Peter's
> > experimental runnable load propagation patches.
> >
> > Signed-off-by: Josef Bacik <jbacik@fb.com>
>
> > @@ -2897,12 +2904,16 @@ ___update_load_avg(u64 now, int cpu, struct sched_avg *sa,
> >  	 * Use 1024ns as the unit of measurement since it's a reasonable
> >  	 * approximation of 1us and fast to compute.
> >  	 */
> > +	remainder = delta & (1023UL);
> > +	sa->last_update_time = now;
> > +	if (running)
> > +		sa->run_remainder = remainder;
> > +	else
> > +		sa->sleep_remainder = remainder;
> >  	delta >>= 10;
> >  	if (!delta)
> >  		return 0;
> >
> > -	sa->last_update_time += delta << 10;
> > -
>
> So I'm wondering, this chunk changes how sa->last_update_time is maintained
> in ___update_load_avg(): the new code takes a precise timestamp, while the
> old code was not taking a timestamp at all - it was updating via deltas,
> where each delta was rounded down to the nearest 1024 nsecs boundary.
Right..
> That, if this is the main code path that updates ->last_update_time,
> creates a constant drift of rounding error that skews ->last_update_time
> into larger and larger distances from the real 'now' - ever increasing the
> value of 'delta'.
Well, it's a 0-sum; it doesn't drift unbounded. The difference will grow up to 1023 ns, at which point we account for it as part of a whole 1024 ns block and we're back to 0.
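To make that concrete, here's a toy user-space loop (illustration only, not the kernel code) doing the old delta-based accounting for repeated 700 ns runs:

/* toy model of the old delta-based accounting; not kernel code */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t now = 0, last_update_time = 0;

	for (int i = 0; i < 3; i++) {
		now += 700;				/* a 700 ns run */
		uint64_t delta = now - last_update_time;
		delta >>= 10;				/* whole 1024 ns blocks */
		last_update_time += delta << 10;	/* round down, keep residue */
		printf("now=%4llu accounted=%4llu residue=%3llu\n",
		       (unsigned long long)now,
		       (unsigned long long)last_update_time,
		       (unsigned long long)(now - last_update_time));
	}
	return 0;
}

The residue goes 700, 376, 52 - it stays bounded and nothing is ever lost; whenever it would exceed 1023, a whole 1024 ns block gets accounted and the residue shrinks again.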
The problem is that there are two states, running and blocked, and the current scheme does not differentiate between them. We accrue the sub-1024 ns residue and spill it into whichever state gets lucky.
Now, on average you'd hope that that works out and both running and blocked get an equal number of spills pro-rata.
But apparently this isn't quite working out for Josef.
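Something like this (again a toy, not the kernel code) shows the mis-attribution when a task alternates 700 ns of running with 1000 ns of blocking:

/* toy model of the state spill; not kernel code */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t now = 0, last = 0, run_ns = 0, block_ns = 0;
	int running = 1;

	for (int i = 0; i < 4; i++) {
		now += running ? 700 : 1000;
		uint64_t blocks = ((now - last) >> 10) << 10;
		last += blocks;
		if (running)
			run_ns += blocks;	/* can contain blocked residue */
		else
			block_ns += blocks;	/* can contain running residue */
		running = !running;
	}
	printf("accounted run=%llu blocked=%llu, actual run=1400 blocked=2000\n",
	       (unsigned long long)run_ns, (unsigned long long)block_ns);
	return 0;
}

This accounts run=1024 blocked=2048 against an actual 1400/2000 split: the running side ends up short because its residue keeps spilling into the blocked side.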
> An intermediate approach to improve that skew would be something like
> below. It doesn't track the remainder like your patch does, but doesn't
> lose precision either, just rounds down 'now' to the nearest 1024 boundary.
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 008c514dc241..b03703cd7989 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2965,7 +2965,7 @@ ___update_load_avg(u64 now, int cpu, struct sched_avg *sa,
>  	if (!delta)
>  		return 0;
>
> -	sa->last_update_time += delta << 10;
> +	sa->last_update_time = now & ~1023ULL;
>
So if we have a task that always runs < 1024 ns, it should still get blocks of runtime, because the difference between 'now' and 'now & ~1023' can be non-zero and spill over.
I'm just not immediately seeing how it's different from the 0-sum we had. It should be identical, since accumulating delta*1024 would equally land us on those same edges (there's an offset in the differential form between the two, but since we start with last_update_time=0, the resulting edges are the same afaict).
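FWIW, a quick standalone check of that claim (assuming last_update_time starts at 0 and is only ever updated through this path):

/* check: delta-based rounding lands on the same 1024 ns edges as now & ~1023 */
#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint64_t now = 0, lut_old = 0, lut_new = 0;
	uint64_t steps[] = { 700, 700, 300, 5000, 100 };

	for (int i = 0; i < 5; i++) {
		now += steps[i];
		lut_old += ((now - lut_old) >> 10) << 10;	/* old form */
		lut_new = now & ~1023ULL;			/* proposed form */
		assert(lut_old == lut_new);
	}
	return 0;
}

Since lut_old stays a multiple of 1024, adding the rounded-down delta is exactly flooring 'now' to the same edge, so the two forms do land on identical values.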
*confused*