Date:	Wed, 14 Oct 2009 17:20:03 +0530
From:	Bharata B Rao <>
Subject:	Re: [RFC v2 PATCH 4/8] sched: Enforce hard limits by throttling
On Wed, Oct 14, 2009 at 11:17:44AM +0200, Peter Zijlstra wrote:
> On Wed, 2009-10-14 at 09:11 +0530, Bharata B Rao wrote:
> > On Tue, Oct 13, 2009 at 04:27:00PM +0200, Peter Zijlstra wrote:
> > > On Wed, 2009-09-30 at 18:22 +0530, Bharata B Rao wrote:
> > > >
> > > > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > > > index 0f1ea4a..77ace43 100644
> > > > --- a/include/linux/sched.h
> > > > +++ b/include/linux/sched.h
> > > > @@ -1024,7 +1024,7 @@ struct sched_domain;
> > > >  struct sched_class {
> > > >  	const struct sched_class *next;
> > > >
> > > > -	void (*enqueue_task) (struct rq *rq, struct task_struct *p, int wakeup);
> > > > +	int (*enqueue_task) (struct rq *rq, struct task_struct *p, int wakeup);
> > > >  	void (*dequeue_task) (struct rq *rq, struct task_struct *p, int sleep);
> > > >  	void (*yield_task) (struct rq *rq);
> > >
> > > I really hate this, it uglifies all the enqueue code in a horrid way
> > > (which is most of this patch).
> > >
> > > Why can't we simply enqueue the task on a throttled group just like rt?
> >
> > We do enqueue a task to its group even if the group is throttled. However such
> > throttled groups are not enqueued further. In such scenarios, even though the
> > task enqueue to its parent group succeeded, it really didn't add any task to
> > the cpu runqueue (rq). So we need to identify this condition and don't
> > increment rq->nr_running. That is why this return value is needed.
>
> I would still consider those tasks running, the fact that they don't get
> to run is a different matter.
Ok, that's how rt also considers them, I realize. I thought that we should update rq->nr_running when tasks go off the runqueue due to throttling. When a task is throttled, it is no doubt present on its group's cfs_rq, but it doesn't contribute to the CPU load since the throttled group entity isn't present on any cfs_rq. rq->nr_running is used to obtain a few load balancing metrics, and those might go wrong if rq->nr_running isn't up to date.
Do you still think we shouldn't update rq->nr_running? If so, I can get rid of this return value change.
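To make the intent concrete, here is roughly the plumbing the return value is meant to provide. This is an illustrative sketch only, simplified from the patchset and not the actual code; cfs_rq_throttled() is a stand-in name for whatever throttle check the final version uses, and it assumes the enqueue_task() wrapper in kernel/sched.c propagates the class hook's return value.

/*
 * Sketch: enqueue_task_fair() walks up the group hierarchy as before,
 * but reports whether the enqueue actually made a new task runnable on
 * this CPU's root cfs_rq.
 */
static int enqueue_task_fair(struct rq *rq, struct task_struct *p, int wakeup)
{
	struct cfs_rq *cfs_rq;
	struct sched_entity *se = &p->se;

	for_each_sched_entity(se) {
		if (se->on_rq)
			break;
		cfs_rq = cfs_rq_of(se);
		enqueue_entity(cfs_rq, se, wakeup);
		/*
		 * The task (or child group entity) is queued on its
		 * group's cfs_rq, but a throttled group entity is not
		 * queued any further up, so nothing new becomes
		 * runnable on this CPU.
		 */
		if (cfs_rq_throttled(cfs_rq))
			return 0;
		wakeup = 1;
	}
	return 1;
}

/*
 * Callers would then bump rq->nr_running only when the enqueue actually
 * reached the CPU runqueue, e.g. (simplified):
 */
static void activate_task(struct rq *rq, struct task_struct *p, int wakeup)
{
	if (enqueue_task(rq, p, wakeup))
		inc_nr_running(rq);
}

The ugliness you point at is that this return value has to be threaded through every enqueue path; the sketch above only shows the fair-class leg of it.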
>
> This added return value really utterly craps up the code and I'm not
> going to take it.
OK :) I will work towards making them more acceptable in future iterations.
>
> What I'm not seeing is why all this code looks so very much different
> from the rt bits.
The throttling code here looks different from the rt code for the following reasons:
- As I mentioned earlier, I update rq->nr_running during throttling, which is not done in rt afaics.
- There are special conditions to prevent movement of tasks in and out of throttled groups during load balancing and migration.
- rt dequeues the throttled entity by walking the entity hierarchy from update_curr_rt(). But I found it difficult to do the same in cfs because update_curr() is called from many different places, including places where we are already walking the entity hierarchy. A second walk (in update_curr()) of the hierarchy while we are in the middle of a hierarchy walk didn't look all that good. So I resorted to just marking the entity as throttled in update_curr() and doing the actual dequeuing later from put_prev_entity(), roughly as sketched below. Isn't this acceptable?
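To be concrete about the last point, the shape I have in mind is roughly the following. This is an illustrative sketch with stand-in names: runtime_remaining, throttled, check_cfs_rq_throttle() and entity_throttled() are not the actual fields/helpers from the patch, and stats/spread handling is omitted.

/*
 * Sketch: after charging exec time against the group's bandwidth,
 * update_curr() only marks the throttled state. No dequeue happens
 * here, because update_curr() may be called while we are already in
 * the middle of a hierarchy walk.
 */
static inline void check_cfs_rq_throttle(struct cfs_rq *cfs_rq)
{
	if (cfs_rq->runtime_remaining <= 0)
		cfs_rq->throttled = 1;		/* mark only, dequeue later */
}

/*
 * The actual dequeue is deferred to put_prev_entity(), which runs at a
 * safe point where we are not nested inside another walk: instead of
 * putting a throttled group entity back into the tree, take it off its
 * parent's cfs_rq.
 */
static void put_prev_entity(struct cfs_rq *cfs_rq, struct sched_entity *prev)
{
	if (prev->on_rq)
		update_curr(cfs_rq);		/* may mark the group throttled */

	if (prev->on_rq) {
		if (entity_throttled(prev))
			dequeue_entity(cfs_rq, prev, 0);
		else
			__enqueue_entity(cfs_rq, prev);	/* put 'current' back */
	}
	cfs_rq->curr = NULL;
}

Compared to rt, which dequeues right from update_curr_rt(), this defers the dequeue to the next put_prev_entity() on that cfs_rq.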
Regards,
Bharata.