Subject: Re: posix-cpu-timers revamp

My explanation about the constraints on exit_state was specifically about
the context of update_process_times(), which is part of the path for a
clock tick interrupting current.

> And, you guessed it, the invariant gets violated. Apparently the passed
> task_struct isn't the same as "current" at this point.

The scheduler code has gotten a lot more complex since I first implemented
posix-cpu-timers, and I've never been an expert on the scheduler at all.
But I'm moderately sure those call sites are all involved in context
switching, where the task of interest is about to be on a CPU or just was
on one. I doubt those are places where the task in question could be
simultaneously executing in exit_notify() on another CPU. But we'd need
to ask the scheduler experts to be sure we know what we're talking about
there.
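
To make that concrete, the kind of call site I have in mind is shaped
roughly like the CFS update_curr() path (this is my sketch of the general
shape, not a quote from your patch); here the task being charged is
cfs_rq->curr, which at context-switch time is not necessarily the same
thing as "current" from the caller's point of view:

static void update_curr(struct cfs_rq *cfs_rq)
{
	struct sched_entity *curr = cfs_rq->curr;
	u64 now = rq_of(cfs_rq)->clock;
	unsigned long delta_exec;

	if (unlikely(!curr))
		return;

	/* Runtime accumulated by the task that is on (or leaving) the CPU. */
	delta_exec = (unsigned long)(now - curr->exec_start);

	__update_curr(cfs_rq, curr, delta_exec);
	curr->exec_start = now;

	if (entity_is_task(curr)) {
		struct task_struct *curtask = task_of(curr);

		account_group_exec_runtime(curtask, delta_exec);
	}
}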

> Found the exception. do_fork() violates the invariant when it's
> cranking up a new process. Hmmm.

I haven't figured out what actual code path this refers to.


This sort of concern is among the reasons that checking ->signal was the
course I thought wise to suggest to begin with. We can figure out what the
constraints on ->exit_state are in all those places by understanding every
corner of the scheduler. We can measure whether it winds up being in a
cooler cache line than ->signal and a net loss to add the load, or has
superior performance as you seem to think. Or we can just test the
constraint that matters, whether the pointer we loaded was in fact null,
and rely on RCU to make it not matter if there is a race after that load.
It doesn't matter whether tsk is current or not; it only matters that we
have the pointer and that we're using some per-CPU array slot or other
that no one else is using simultaneously.

static inline void account_group_exec_runtime(struct task_struct *tsk,
					      unsigned long long runtime)
{
	struct signal_struct *sig;
	struct task_cputime *times;

	rcu_read_lock();
	sig = rcu_dereference(tsk->signal);
	/*
	 * A NULL ->signal means the task is past exit_notify(); nothing to
	 * account.  Otherwise RCU keeps the structure valid for the rest of
	 * this read-side critical section, races notwithstanding.
	 */
	if (likely(sig) && sig->cputime.totals) {
		/* get_cpu() disables preemption, so this per-CPU slot is
		   ours until put_cpu_no_resched(). */
		times = per_cpu_ptr(sig->cputime.totals, get_cpu());
		times->sum_exec_runtime += runtime;
		put_cpu_no_resched();
	}
	rcu_read_unlock();
}
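
Presumably the sibling accounting helpers would follow the same pattern;
for instance, the user-time one might look something like this (just a
sketch of mine, with the utime field name assumed rather than taken from
your patch):

static inline void account_group_user_time(struct task_struct *tsk,
					   cputime_t cputime)
{
	struct signal_struct *sig;
	struct task_cputime *times;

	rcu_read_lock();
	sig = rcu_dereference(tsk->signal);
	if (likely(sig) && sig->cputime.totals) {
		times = per_cpu_ptr(sig->cputime.totals, get_cpu());
		times->utime = cputime_add(times->utime, cputime);
		put_cpu_no_resched();
	}
	rcu_read_unlock();
}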


Thanks,
Roland

