Subject: Re: [patch 00/17] CFS Bandwidth Control v7.1
From: Peter Zijlstra <>
Date: Thu, 07 Jul 2011 16:38:48 +0200
On Thu, 2011-07-07 at 13:23 +0200, Ingo Molnar wrote:
>
> The +1.5% increase in vanilla kernel context switching performance is
> unfortunate - where does that overhead come from?
Looking at the asm output, I think it's partly because things like:
@@ -602,6 +618,8 @@ static void update_curr(struct cfs_rq *c
 		cpuacct_charge(curtask, delta_exec);
 		account_group_exec_runtime(curtask, delta_exec);
 	}
+
+	account_cfs_rq_runtime(cfs_rq, delta_exec);
 }
+static void account_cfs_rq_runtime(struct cfs_rq *cfs_rq,
+		unsigned long delta_exec)
+{
+	if (!cfs_rq->runtime_enabled)
+		return;
+
+	cfs_rq->runtime_remaining -= delta_exec;
+	if (cfs_rq->runtime_remaining > 0)
+		return;
+
+	assign_cfs_rq_runtime(cfs_rq);
+}
generate a call, only to then take the first branch out; marking that function __always_inline would cure the call problem. Going beyond that, one could use static_branch() to track whether any bandwidth tracking is required at all.
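For illustration, a sketch of what combining the two suggestions might look like (hypothetical, not the actual patch: the static key interface shown is the modern DEFINE_STATIC_KEY_FALSE()/static_branch_unlikely() form rather than the jump_label_key API current in 2011, and the key name cfs_bandwidth_used and the usage-count helper are assumed):

#include <linux/jump_label.h>

/* Flipped once the first bandwidth-enabled group shows up. */
static DEFINE_STATIC_KEY_FALSE(cfs_bandwidth_used);

/*
 * __always_inline removes the call; the static key turns the common
 * no-bandwidth case into a patched-out branch, so update_curr() pays
 * (almost) nothing when no group has a quota configured.
 */
static __always_inline void account_cfs_rq_runtime(struct cfs_rq *cfs_rq,
		unsigned long delta_exec)
{
	if (!static_branch_unlikely(&cfs_bandwidth_used))
		return;

	if (!cfs_rq->runtime_enabled)
		return;

	cfs_rq->runtime_remaining -= delta_exec;
	if (cfs_rq->runtime_remaining > 0)
		return;

	assign_cfs_rq_runtime(cfs_rq);
}

/* e.g. called when a group first gets a quota set: */
static void cfs_bandwidth_usage_inc(void)
{
	static_branch_inc(&cfs_bandwidth_used);
}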