Subject: Re: [PATCH 2/2] sched: Rewrite per entity runnable load average tracking
Thanks, Peter.

On Wed, Jul 09, 2014 at 08:45:43PM +0200, Peter Zijlstra wrote:

> Nope :-).. we got rid of that lock for a good reason.
>
> Also, this is one area where I feel performance really trumps
> correctness, we can fudge the blocked load a little. So the
> sched_clock_cpu() difference is a strict upper bound on the
> rq_clock_task() difference (and under 'normal' circumstances shouldn't
> be much off).

Strictly, migrating a wakee task from a remote CPU entails two steps:

(1) Catch up with the task's queue's last_update_time, and then subtract

(2) Catch up with the "current" time of the remote CPU (so the times are
comparable), and then on the new CPU, switch to the new timing source (at enqueue)

So I will try sched_clock_cpu(remote_cpu) for step (2). Maybe we should not use
cfs_rq_clock_task for that step anyway, since the task is about to go to
another CPU/queue. Is this right?
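
To make sure I have the two steps right, here is a toy user-space sketch of the
bookkeeping (made-up structures and a fake decay function, not the actual kernel
code): decay the entity up to the source queue's last_update_time and subtract it
there, then re-stamp the entity with the destination queue's clock at enqueue so
it decays against the new time source from then on.

#include <stdio.h>
#include <stdint.h>

/* Toy model only: not the kernel's structures or math. */
struct toy_queue {
	uint64_t clock;			/* this queue's time source */
	uint64_t last_update_time;
	uint64_t blocked_load;
};

struct toy_entity {
	uint64_t last_update_time;	/* stamped in the owning queue's time */
	uint64_t load_avg;
};

/* Fake decay: halve the load for every elapsed 1024-unit period. */
static uint64_t toy_decay(uint64_t load, uint64_t delta)
{
	uint64_t periods = delta / 1024;
	return periods >= 64 ? 0 : load >> periods;
}

/* Step (1): catch up with the source queue's last_update_time, then subtract. */
static void toy_remove_from_source(struct toy_entity *se, struct toy_queue *src)
{
	se->load_avg = toy_decay(se->load_avg,
				 src->last_update_time - se->last_update_time);
	src->blocked_load -= se->load_avg;
}

/* Step (2): at enqueue, switch the entity to the destination queue's time source. */
static void toy_add_to_dest(struct toy_entity *se, struct toy_queue *dst)
{
	se->last_update_time = dst->clock;	/* re-stamp; no decay across queues */
	/* the destination's sums pick up se->load_avg on enqueue */
}

int main(void)
{
	struct toy_queue src = { .clock = 8192, .last_update_time = 8192, .blocked_load = 300 };
	struct toy_queue dst = { .clock = 123456 };
	struct toy_entity se  = { .last_update_time = 4096, .load_avg = 100 };

	toy_remove_from_source(&se, &src);
	toy_add_to_dest(&se, &dst);
	printf("migrated load_avg=%llu src blocked_load=%llu\n",
	       (unsigned long long)se.load_avg,
	       (unsigned long long)src.blocked_load);
	return 0;
}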

I made another mistake: we should not only track task entity load; group entities
(as entities) also need to be tracked. Otherwise, task_h_load() can't be computed
correctly... Sorry for the mess-up, but this won't change the code much.
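
Roughly, the group entity's own average is what scales a task's load up through
each level of the hierarchy, so without tracking it there is nothing for
task_h_load() to multiply by. A toy illustration of that scaling at one group
level (invented numbers and names, not the kernel's task_h_load()):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t task_load_avg     = 512;	/* the task entity's tracked average */
	uint64_t group_cfs_rq_load = 1024;	/* total load inside the group's cfs_rq */
	uint64_t group_se_load_avg = 256;	/* the group entity's own average --
						 * this is what must also be tracked */

	/* The task's hierarchical load: its share of the group entity's
	 * contribution as seen one level up. */
	uint64_t h_load = task_load_avg * group_se_load_avg / group_cfs_rq_load;

	printf("h_load = %llu\n", (unsigned long long)h_load);
	return 0;
}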

Thanks,
Yuyang

> So we could simply use a timestamps from dequeue and one from enqueue,
> and use that.
>
> As to the remote subtraction, a RMW on another cacheline than the
> rq->lock one should be good; esp since we don't actually observe the
> per-rq total often (once per tick or so) I think, no?
>
> The thing is, we do not want to disturb scheduling on whatever cpu the
> task last ran on if we wake it to another cpu. Taking rq->lock wrecks
> that for sure.
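
For what it's worth, a toy user-space sketch of the "RMW on another cacheline"
idea, with C11 atomics standing in for the kernel's atomic ops and invented field
names: remote CPUs accumulate removed load into an atomic that lives on its own
cacheline, and the owning CPU folds it into its total once per tick, so the
remote side never takes the queue's lock.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

/* Toy model only: the remotely written accumulator is kept on its own
 * cacheline, away from the owner-only fields. */
struct toy_queue {
	uint64_t blocked_load;			/* owner-only, under the queue's lock */
	_Alignas(64) _Atomic uint64_t removed;	/* written by remote CPUs */
};

/* Remote side: a lockless RMW, no need for the owner's lock. */
static void toy_remove_load_remote(struct toy_queue *q, uint64_t load)
{
	atomic_fetch_add_explicit(&q->removed, load, memory_order_relaxed);
}

/* Owner side: fold the accumulated removals in, e.g. once per tick. */
static void toy_fold_removed(struct toy_queue *q)
{
	uint64_t r = atomic_exchange_explicit(&q->removed, 0,
					      memory_order_relaxed);
	q->blocked_load -= (r > q->blocked_load) ? q->blocked_load : r;
}

int main(void)
{
	struct toy_queue q = { .blocked_load = 1000 };

	toy_remove_load_remote(&q, 300);	/* a wakeup migrates a task away */
	toy_fold_removed(&q);			/* next tick on the owning CPU */
	printf("blocked_load = %llu\n", (unsigned long long)q.blocked_load);
	return 0;
}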



