    From: Bharata B Rao <bharata@linux.vnet.ibm.com>
    Subject: Re: [CFS Bandwidth Control v4 1/7] sched: introduce primitives to account for CFS bandwidth tracking
    On Wed, Feb 16, 2011 at 10:22:16PM +0530, Balbir Singh wrote:
    > * Paul Turner <pjt@google.com> [2011-02-15 19:18:32]:
    >
    > > In this patch we introduce the notion of CFS bandwidth. To account for the
    > > realities of SMP, this is partitioned into globally unassigned bandwidth and
    > > locally claimed bandwidth:
    > > - The global bandwidth is per task_group; it represents a pool of unclaimed
    > >   bandwidth that cfs_rq's can allocate from. It uses the new cfs_bandwidth
    > >   structure.
    > > - The local bandwidth is tracked per-cfs_rq; it represents allotments from
    > >   the global pool of bandwidth assigned to the task_group.
    > >
    > > Bandwidth is managed via two new cgroupfs files in the cpu subsystem:
    > > - cpu.cfs_period_us : the bandwidth period in usecs
    > > - cpu.cfs_quota_us : the cpu bandwidth (in usecs) that this tg will be allowed
    > >   to consume over the period above.
    > >
    > > A per-cfs_bandwidth timer is also introduced to handle future refresh at
    > > period expiration. There's some minor refactoring here so that
    > > start_bandwidth_timer() functionality can be shared.
    > >
    > > Signed-off-by: Paul Turner <pjt@google.com>
    > > Signed-off-by: Nikhil Rao <ncrao@google.com>
    > > Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
    > > ---
    >
    > Looks good, minor nits below
    >
    >
    > Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>

    Thanks Balbir.
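
    For anyone following along in the archives, here is a rough sketch of the
    global side of the split described in the changelog above. The fields shown
    (lock, period, quota, runtime, period_timer) are the ones visible in the
    hunks quoted in this thread; the layout and comments are illustrative, not
    a verbatim copy of the patch:

        #include <linux/hrtimer.h>
        #include <linux/ktime.h>
        #include <linux/spinlock.h>
        #include <linux/types.h>

        /* Global pool, one per task_group: bandwidth not yet claimed by any cfs_rq. */
        struct cfs_bandwidth {
                raw_spinlock_t  lock;           /* protects the pool */
                ktime_t         period;         /* cpu.cfs_period_us, as ktime */
                u64             quota;          /* cpu.cfs_quota_us, stored in ns */
                u64             runtime;        /* unassigned bandwidth left this period */
                struct hrtimer  period_timer;   /* drives the per-period refresh */
        };

    The local side lives in each cfs_rq as an allotment drawn from the runtime
    field above; the per-cfs_rq field names are not visible in the hunks quoted
    here, so they are left out of the sketch.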

    > > +
    > > +static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
    > > +{
    > > +        struct cfs_bandwidth *cfs_b =
    > > +                container_of(timer, struct cfs_bandwidth, period_timer);
    > > +        ktime_t now;
    > > +        int overrun;
    > > +        int idle = 0;
    > > +
    > > +        for (;;) {
    > > +                now = hrtimer_cb_get_time(timer);
    > > +                overrun = hrtimer_forward(timer, now, cfs_b->period);
    > > +
    > > +                if (!overrun)
    > > +                        break;
    > > +
    > > +                idle = do_sched_cfs_period_timer(cfs_b, overrun);
    >
    > This patch just sets up do_sched_cfs_period_timer() to return 1. I am
    > afraid I don't understand why this function is introduced here.

    Answered this during last post: http://lkml.org/lkml/2010/10/14/31
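
    For context: in this patch do_sched_cfs_period_timer() is only a stub that
    reports the group as idle, which in the usual hrtimer idiom means the
    handler above would not re-arm the timer; the real refresh logic comes
    later in the series. Schematically, the eventual shape is something like
    the following (my own illustration of the pattern, not code quoted from
    any patch; RUNTIME_INF marks an unlimited quota):

        /* Refill the global pool at each period boundary; return 1 if idle. */
        static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun)
        {
                int idle = 1;

                raw_spin_lock(&cfs_b->lock);
                if (cfs_b->quota != RUNTIME_INF) {
                        /* hand back a full quota's worth of unassigned runtime */
                        cfs_b->runtime = cfs_b->quota;
                        idle = 0;
                }
                raw_spin_unlock(&cfs_b->lock);

                /* the caller decides HRTIMER_RESTART vs HRTIMER_NORESTART from this */
                return idle;
        }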

    > > +
    > > +        mutex_lock(&mutex);
    > > +        raw_spin_lock_irq(&tg->cfs_bandwidth.lock);
    > > +        tg->cfs_bandwidth.period = ns_to_ktime(period);
    > > +        tg->cfs_bandwidth.runtime = tg->cfs_bandwidth.quota = quota;
    > > +        raw_spin_unlock_irq(&tg->cfs_bandwidth.lock);
    > > +
    > > +        for_each_possible_cpu(i) {
    >
    > Why for each possible cpu - to avoid hotplug handling?

    Touched upon this during last post: https://lkml.org/lkml/2010/12/6/49
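
    The general idea behind walking the possible mask is that the new
    period/quota reaches every per-cpu cfs_rq, including CPUs that are
    currently offline, so no hotplug notifier is needed to bring a
    late-onlined CPU's cfs_rq in sync. Something along these lines
    (illustrative only; rq_of() and init_cfs_rq_quota() stand in here for
    whatever per-cfs_rq reset the actual patch performs):

        for_each_possible_cpu(i) {
                struct cfs_rq *cfs_rq = tg->cfs_rq[i];  /* this group's runqueue on CPU i */
                struct rq *rq = rq_of(cfs_rq);

                raw_spin_lock_irq(&rq->lock);
                /* restart this cfs_rq's local allotment under the new quota */
                init_cfs_rq_quota(cfs_rq);
                raw_spin_unlock_irq(&rq->lock);
        }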

    Regards,
    Bharata.

