Subject: Re: [PATCH v2] sched, timer: Use atomics for thread_group_cputimer to improve scalability
On Mon, 2015-03-02 at 20:43 +0100, Oleg Nesterov wrote:
> On 03/02, Oleg Nesterov wrote:
> >
> > Well, I forgot everything about this code, but let me ask anyway ;)
> >
> > On 03/02, Jason Low wrote:
> > >
> > > -static void update_gt_cputime(struct task_cputime *a, struct task_cputime *b)
> > > +static inline void __update_gt_cputime(atomic64_t *cputime, u64 sum_cputime)
> > > {
> > > - if (b->utime > a->utime)
> > > - a->utime = b->utime;
> > > -
> > > - if (b->stime > a->stime)
> > > - a->stime = b->stime;
> > > + u64 curr_cputime;
> > > + /*
> > > + * Set cputime to sum_cputime if sum_cputime > cputime. Use cmpxchg
> > > + * to avoid race conditions with concurrent updates to cputime.
> > > + */
> > > +retry:
> > > + curr_cputime = atomic64_read(cputime);
> > > + if (sum_cputime > curr_cputime) {
> > > + if (atomic64_cmpxchg(cputime, curr_cputime, sum_cputime) != curr_cputime)
> > > + goto retry;
> > > + }
> > > +}
> > >
> > > - if (b->sum_exec_runtime > a->sum_exec_runtime)
> > > - a->sum_exec_runtime = b->sum_exec_runtime;
> > > +static void update_gt_cputime(struct thread_group_cputimer *cputimer, struct task_cputime *sum)
> > > +{
> > > + __update_gt_cputime(&cputimer->utime, sum->utime);
> > > + __update_gt_cputime(&cputimer->stime, sum->stime);
> > > + __update_gt_cputime(&cputimer->sum_exec_runtime, sum->sum_exec_runtime);
> > > }
> >
> > And this is called if !cputimer_running().
> >
> > So who else can update these atomic64_t's? The caller is called under ->siglock.
> > IOW, do we really need the cmpxchg/retry?
> >
> > Just curious, I am sure I missed something.
>
> Ah, sorry, I seem to understand.
>
> We still can race with account_group_*time() even if ->running == 0, because
> (say) account_group_exec_runtime() can race with a 1 -> 0 -> 1 transition.
>
> Or is there another reason?

Hi Oleg,

Yes, that 1 -> 0 -> 1 transition was the race that I had in mind. Thus,
I added the extra atomic logic in update_gt_cputime() just to be safe.
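
To spell the race out (a sketch of one bad interleaving; the
account_group_exec_runtime() side is paraphrased from
kernel/sched/stats.h with this patch applied, not quoted verbatim):

	CPU 0 (tick path, lockless)        CPU 1 (under ->siglock)
	---------------------------        ----------------------------------
	account_group_exec_runtime():
	  sees cputimer->running == 1
	                                   timer stops: running = 0
	                                   timer is re-armed, so
	                                   update_gt_cputime() runs:
	                                     reads sum_exec_runtime
	  atomic64_add(ns,
	    &cputimer->sum_exec_runtime);
	                                     stores the (now stale) maximum

If __update_gt_cputime() did a plain read-then-atomic64_set(), CPU 1
would overwrite the runtime CPU 0 just added. With atomic64_cmpxchg(),
the concurrent add changes the value, the cmpxchg fails, and we retry
against the fresh value, so nothing is lost.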

In the original code, we set cputimer->running first, so the timer is
already marked running while update_gt_cputime() executes. In this
patch, we swapped the two calls so that running is set only after
update_gt_cputime() returns, so that particular window isn't an issue
anymore.
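
I.e., with the patch the caller looks roughly like this (a sketch of
the new ordering, not the exact hunk; READ_ONCE()/WRITE_ONCE() stand in
for however the lockless accesses to ->running end up annotated):

	void thread_group_cputimer(struct task_struct *tsk,
				   struct task_cputime *times)
	{
		struct thread_group_cputimer *cputimer = &tsk->signal->cputimer;
		struct task_cputime sum;

		if (!READ_ONCE(cputimer->running)) {
			thread_group_cputime(tsk, &sum);
			/* Update the atomic fields first ... */
			update_gt_cputime(cputimer, &sum);
			/*
			 * ... and mark the timer running only afterwards,
			 * so new account_group_*() callers can't race with
			 * the update. A caller that already saw running == 1
			 * still can, which is why update_gt_cputime() keeps
			 * the cmpxchg/retry.
			 */
			WRITE_ONCE(cputimer->running, 1);
		}
		/* ... then sample the atomic fields into *times */
	}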


