Date: Tue, 26 Feb 2008 14:20:44 +0100
From: "J.C. Pizarro" <>
Subject: Re: Please, put 64-bit counter per task and incr.by.one each ctxt switch.
On 2008/2/25, Andrew Morton <akpm@linux-foundation.org> wrote:
> On Sun, 24 Feb 2008 14:12:47 +0100 "J.C. Pizarro" <jcpiza@gmail.com> wrote:
>
> > It's statistic, yes, but it's a very important parameter for the CPU-scheduler.
> > The CPU-scheduler will know the number of context switches of each task
> > before of to take a blind decision into infinitum!.
>
> We already have these:
>
>  unsigned long nvcsw, nivcsw; /* context switch counts */
>
> in the task_struct.
Some objections to these existing counters:

1. They use "unsigned long" instead of "unsigned long long".
2. They are initialized with "= 0;" instead of "= 0ULL;".
3. They don't use ++ (increment by one per context switch).
4. I don't like the separation into voluntary and involuntary context
   switches, and I don't understand the utility of this separation.
The tsk->nvcsw and tsk->nivcsw counters mean something different from
what I had proposed.
It's simple to do the ++ when kernel/sched.c:context_switch(..) is
called, but they don't do it there.
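As a rough sketch (the signature is abbreviated from a 2.6-era
kernel/sched.c, and the ncsw field is my proposed addition, not an
existing member of task_struct), the increment would be one line:

    /* kernel/sched.c -- sketch only, surrounding code elided */
    static inline void
    context_switch(struct rq *rq, struct task_struct *prev,
                   struct task_struct *next)
    {
            next->ncsw++;   /* proposed: one ++ per context switch, 64-bit */

            /* ... existing mm handover and switch_to() code ... */
    }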
I propose:

1. unsigned long long tsk->ncsw = 0ULL; and tsk->ncsw++; on each
   context switch.
2. unsigned long long tsk->last_registered_ncsw = tsk->ncsw; when it's
   polled.
3. long tsk->vcsw = (tsk->ncsw - tsk->last_registered_ncsw) / (t2 - t1);
   /* velocity of the task in ctxt-switches per second; t1 != t2,
      both in seconds, so there is no division by zero */
4. long tsk->last_registered_vcsw = tsk->vcsw;
5. long tsk->normalized_vcsw =
       (1 - alpha) * tsk->last_registered_vcsw + alpha * tsk->vcsw;
   /* 0 < alpha < 1 */
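A minimal self-contained sketch of how points 2-5 could fit together,
assuming a periodic poll at times t1 and t2 (the struct and function
names are illustrative, not existing kernel code, and alpha is taken
as 1/4 in integer arithmetic since the kernel avoids floating point):

    /* illustrative only: proposed per-task fields */
    struct task_counters {
            unsigned long long ncsw;                 /* 1: total ctxt switches */
            unsigned long long last_registered_ncsw; /* 2: snapshot at last poll */
            long vcsw;                               /* 3: switches per second */
            long last_registered_vcsw;               /* 4: previous velocity */
            long normalized_vcsw;                    /* 5: smoothed velocity */
    };

    /* hypothetical poll, called at times t1 then t2 (seconds, t2 > t1);
       on 32-bit kernels the 64-bit division would need do_div() */
    static void poll_ctxt_velocity(struct task_counters *tc, long t1, long t2)
    {
            tc->vcsw = (long)((tc->ncsw - tc->last_registered_ncsw) /
                              (unsigned long long)(t2 - t1));
            tc->last_registered_ncsw = tc->ncsw;

            /* step 5 uses the velocity registered at the previous poll,
               so it runs before the snapshot in step 4; with alpha = 1/4:
               normalized = (1 - alpha)*last + alpha*new                  */
            tc->normalized_vcsw =
                    (3 * tc->last_registered_vcsw + tc->vcsw) / 4;
            tc->last_registered_vcsw = tc->vcsw;
    }

The exponential smoothing damps spikes in vcsw, so a task's measured
switch rate trends over several polls instead of jumping.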
Sincerely yours ;)