Subject: Re: [PATCH v3 2/7] sched: accumulate per-cfs_rq cpu usage
On Tue, 2010-10-12 at 13:21 +0530, Bharata B Rao wrote:
> +static u64 tg_request_cfs_quota(struct task_group *tg)
> +{
> +	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(tg);
> +	u64 delta = 0;
> +
> +	if (cfs_b->runtime > 0 || cfs_b->quota == RUNTIME_INF) {
> +		raw_spin_lock(&cfs_b->lock);
> +		/*
> +		 * it's possible a bandwidth update has changed the global
> +		 * pool.
> +		 */
> +		if (cfs_b->quota == RUNTIME_INF)
> +			delta = sched_cfs_bandwidth_slice();
> +		else {
> +			delta = min(cfs_b->runtime,
> +					sched_cfs_bandwidth_slice());
> +			cfs_b->runtime -= delta;
> +		}
> +		raw_spin_unlock(&cfs_b->lock);
> +	}
> +	return delta;
> +}

Since you check cfs_b->quota outside of cfs_b->lock anyway, you might as
well avoid taking the lock in that case and directly return the slice.

Also, you possibly evaluate sched_cfs_bandwidth_slice() twice.
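
Something along these lines (completely untested, keeping the helpers and
field names from the quoted patch) would cover both points:

static u64 tg_request_cfs_quota(struct task_group *tg)
{
	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(tg);
	u64 slice = sched_cfs_bandwidth_slice();	/* evaluate only once */
	u64 delta = 0;

	/* unlimited quota needs no accounting, skip the lock entirely */
	if (cfs_b->quota == RUNTIME_INF)
		return slice;

	if (cfs_b->runtime > 0) {
		raw_spin_lock(&cfs_b->lock);
		/*
		 * a bandwidth update may have changed the global pool
		 * (possibly to RUNTIME_INF) since the check above.
		 */
		if (cfs_b->quota == RUNTIME_INF)
			delta = slice;
		else {
			delta = min(cfs_b->runtime, slice);
			cfs_b->runtime -= delta;
		}
		raw_spin_unlock(&cfs_b->lock);
	}
	return delta;
}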

