Date:	Tue, 12 Apr 2016 03:41:41 +0800
From:	Yuyang Du <>
Subject:	Re: [PATCH 2/4] sched/fair: Drop out incomplete current period when sched averages accrue
Hi Vincent,
On Mon, Apr 11, 2016 at 11:08:04AM +0200, Vincent Guittot wrote:
> > @@ -2704,11 +2694,14 @@ static __always_inline int
> >  __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
> >  		  unsigned long weight, int running, struct cfs_rq *cfs_rq)
> >  {
> > -	u64 delta, scaled_delta, periods;
> > -	u32 contrib;
> > -	unsigned int delta_w, scaled_delta_w, decayed = 0;
> > +	u64 delta;
> > +	u32 contrib, periods;
> >  	unsigned long scale_freq, scale_cpu;
> >  
> > +	/*
> > +	 * now rolls down to a period boundary
> > +	 */
> > +	now = now & (u64)(~0xFFFFF);
> >  	delta = now - sa->last_update_time;
> >  	/*
> >  	 * This should only happen when time goes backwards, which it
> > @@ -2720,89 +2713,56 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
> >  	}
> >  
> >  	/*
> > -	 * Use 1024ns as the unit of measurement since it's a reasonable
> > -	 * approximation of 1us and fast to compute.
> > +	 * Use 1024*1024ns as an approximation of 1ms period, pretty close.
> >  	 */
> > -	delta >>= 10;
> > -	if (!delta)
> > +	periods = delta >> 20;
> > +	if (!periods)
> >  		return 0;
> >  	sa->last_update_time = now;
> 
> The optimization looks quite interesting but I see one potential issue
> with migration as we will lose the part of the ongoing period that is
> now not saved anymore. This lost part can be quite significant for a
> short task that ping pongs between CPUs.
Yes, basically we lose precision (~1ms scale as opposed to the ~1us scale). But as I wrote, on each update we may either lose a sub-1ms part or gain a sub-1ms part, so statistically they should even out: the load/util updates give us quite a large number of samples, and we do want a lot of samples for the metrics; that is the point of the entire averaging. Plus, as you also said, the incomplete current period already plays a (somewhat) negative role here.
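To make the tradeoff concrete, below is a minimal userspace sketch (my own
illustration, not the patch itself; PERIOD_SHIFT, PERIOD_MASK, and the sample
values are made up for the demo). It shows that rolling now down to a period
boundary leaves a sub-period remainder anywhere between ~0 and ~1ms
unaccounted at that update:

#include <stdio.h>
#include <stdint.h>

#define PERIOD_SHIFT	20			/* 2^20 ns ~= 1ms per period */
#define PERIOD_MASK	((uint64_t)0xFFFFF)	/* low 20 bits: sub-period part */

int main(void)
{
	/* pretend the last update landed exactly on a period boundary */
	uint64_t last_update_time = (uint64_t)5 << PERIOD_SHIFT;

	/* two hypothetical update times: barely, and almost a full period, in */
	uint64_t samples[2] = {
		last_update_time + (3 << PERIOD_SHIFT) + 100,
		last_update_time + (3 << PERIOD_SHIFT) + 0xFFF00,
	};

	for (int i = 0; i < 2; i++) {
		uint64_t now = samples[i];

		/* now rolls down to a period boundary, as in the patch */
		uint64_t rounded = now & ~PERIOD_MASK;
		uint64_t delta = rounded - last_update_time;
		uint32_t periods = (uint32_t)(delta >> PERIOD_SHIFT);

		printf("periods accrued: %u, sub-period left over: %llu ns\n",
		       periods, (unsigned long long)(now - rounded));
	}
	return 0;
}

Both calls accrue 3 whole periods, but the first leaves only 100ns unaccounted
while the second leaves ~1ms. Since last_update_time is set to the rounded
value, as I read it the remainder is simply picked up by the next update on
the same CPU; it is lost only when the task migrates meanwhile, which is the
case you flag above, and that loss can fall anywhere in the sub-1ms range,
hence the statistical argument.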
In addition, even setting aside the flat hierarchical util patch, we gain quite some efficiency from this change alone.