Subject: Re: [PATCH 4/5] thread_group_cputime: simplify, document the "alive" check
On 06/11, Stanislaw Gruszka wrote:
>
> On Fri, Jun 11, 2010 at 05:15:33PM +0200, Oleg Nesterov wrote:
> > On 06/11, Stanislaw Gruszka wrote:
> > >
> > > On Fri, Jun 11, 2010 at 01:09:56AM +0200, Oleg Nesterov wrote:
> > > > thread_group_cputime() looks as if it is rcu-safe, but in fact this
> > > > was wrong until ea6d290c which pins task->signal to task_struct.
> > > > It checks ->sighand != NULL under rcu, but this can't help if ->signal
> > > > can go away. Fortunately the caller either holds ->siglock, or it is
> > > > fastpath_timer_check() which uses current and checks exit_state == 0.
> > >
> > > Hmm, I thought we avoided calling thread_group_cputime() from
> > > fastpath_timer_check(), but it seems it is still possible when we
> > > call run_posix_cpu_timers() on two different cpus simultaneously ...
> >
> > No, we can't. thread_group_cputimer() does test-and-set ->running
> > under cputimer->lock.
> >
> > But when I sent these patches, I realized we have another race here
> > (with or without these patches). I am already doing the fix.
>
> I don't know what you caught, but I was thinking about:
>
> cpu0                                    cpu1
>
> fastpath_timer_check():
>
>     if (sig->cputimer.running) {
>         struct task_cputime group_sample;
>                                          stop_process_timers():
>
>                                              spin_lock_irqsave(&cputimer->lock, flags);
>                                              cputimer->running = 0;
>                                              spin_unlock_irqrestore(&cputimer->lock, flags);
>
>         thread_group_cputimer(tsk, &group_sample);

Yes, I was thinking about this race too. Please wait a bit, I'll send
the patch.

In short: it is safe to call thread_group_cputime() lockless, but
thread_group_cputimer() must not be called without siglock/tasklist
(oh, and imho we should rename them somehow, their names are almost
identical). And in fact fastpath_timer_check() does not need
thread_group_cputimer().
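
Roughly, the direction I have in mind is that the fastpath only needs
a snapshot of the time already accumulated in cputimer->cputime, which
it can read under cputimer->lock without the test-and-set of ->running.
Just a sketch to show the idea, not the actual patch:

	if (sig->cputimer.running) {
		struct task_cputime group_sample;

		spin_lock(&sig->cputimer.lock);
		group_sample = sig->cputimer.cputime;
		spin_unlock(&sig->cputimer.lock);

		if (task_cputime_expired(&group_sample, &sig->cputime_expires))
			return 1;
	}

If stop_process_timers() clears ->running in between, the worst case
is a slightly stale sample; nothing sets ->running back after
stop_process_timers() cleared it.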

> > > > - Since ea6d290c commit tsk->signal is stable, we can read it first
> > > > and avoid the initialization from INIT_CPUTIME.
> > > >
> > > > - Even if tsk->signal is always valid, we still have to check it
> > > > is safe to use next_thread() under rcu_read_lock(). Currently
> > > > the code checks ->sighand != NULL, change it to use pid_alive()
> > > > which is commonly used to ensure the task wasn't unhashed before
> > > > we take rcu_read_lock().
> > >
> > > I'm not sure how important the values of an almost-dead task are,
> > > but perhaps it would be better to return times from all threads,
> > > looping starting from sig->curr_target.
> >
> > Could you clarify?
>
> Avoid the pid_alive() check and loop starting from sig->curr_target:
>
> 	t = tsk = sig->curr_target;
> 	do {
> 		times->utime = cputime_add(times->utime, t->utime);
> 		times->stime = cputime_add(times->stime, t->stime);
> 		times->sum_exec_runtime += t->se.sum_exec_runtime;
> 	} while_each_thread(tsk, t);
>
> I don't know what the rules are regarding accessing sig->curr_target,
> but if this is done under sighand->siglock we should be safe. The
> question is whether we always have the lock taken; we tried to assure
> that in the past, but do we really?

Ah, you are talking about thread_group_cputime().

Without ->siglock this is not safe. We can change __exit_signal() to
nullify ->curr_target in the group_dead case, then the code above
could check sig->curr_target != NULL.
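
Something like this, untested and just to illustrate:

	/* __exit_signal(), in the group_dead case */
	if (group_dead)
		sig->curr_target = NULL;

and then the loop above becomes

	t = tsk = sig->curr_target;
	if (tsk) {
		do {
			times->utime = cputime_add(times->utime, t->utime);
			times->stime = cputime_add(times->stime, t->stime);
			times->sum_exec_runtime += t->se.sum_exec_runtime;
		} while_each_thread(tsk, t);
	}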

But this is too subtle imho, and not needed. Instead we should move
group_leader into ->signal (and kill signal->leader_pid). I am going
to do more cleanups in this area "later".


Anyway. This all has nothing to do with this patch. The 4/5 change
in thread_group_cputime() is a cleanup, and it can help to make
/proc/pid/stat and /proc/pid/status lockless.

With or without 5/5, thread_group_cputime() can be called lockless
and race with exit/fork. This is fine by itself, but it is wrong
here because the caller sets ->running.

Oleg.


