Subject: Re: Test for CFS Bandwidth Control V6
(2011/06/08 11:54), Paul Turner wrote:
> On Mon, May 23, 2011 at 5:53 PM, Hidetoshi Seto
> <seto.hidetoshi@jp.fujitsu.com> wrote:
>
>> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
>> index 3936393..544072f 100644
>> --- a/kernel/sched_fair.c
>> +++ b/kernel/sched_fair.c
>> @@ -1537,7 +1537,7 @@ static void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
>> walk_tg_tree_from(cfs_rq->tg, tg_unthrottle_down, tg_nop,
>> (void *)&udd);
>>
>> - if (!cfs_rq->load.weight)
>> + if (!cfs_rq->h_nr_running)
>> return;
>>
>
> Why change here?

I was a bit confused here - I was just curious whether, by any chance,
there could be a throttled cfs_rq with (load.weight, h_nr_running) = (0, >0).
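
(Background on the question, in case it helps other readers: the two
fields count different things, so the two guards are not obviously
equivalent.  Roughly - and this is my paraphrase of the data structure,
not something taken from the patch:)

	struct cfs_rq {
		struct load_weight load;	/* sum of the weights of the entities
						 * (tasks or group se's) currently
						 * enqueued on *this* cfs_rq */
		unsigned long h_nr_running;	/* number of runnable tasks anywhere
						 * in the hierarchy below this cfs_rq */
		...
	};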


>> task_delta = cfs_rq->h_nr_running;
>> @@ -1843,10 +1843,9 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>> cfs_rq->h_nr_running++;
>>
>> /* end evaluation on throttled cfs_rq */
>> - if (cfs_rq_throttled(cfs_rq)) {
>> - se = NULL;
>
> Hmm.. yeah this is a casualty of moving the h_nr_running computations
> in-line as a part of the previous refactoring within the last
> releases. This optimization (setting se = NULL to skip the second
> half) obviously won't work properly with detecting whether we made it
> to the end of the tree.
>
(snip)
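
(For anyone following along: for_each_sched_entity() walks se up through
its parents and stops when se becomes NULL, so assigning se = NULL in the
first loop both skipped the second loop and made the later "did we walk
all the way to the root?" test succeed.  A rough sketch of that shape,
reconstructed from my reading of the series rather than quoted from it:)

	/* with CONFIG_FAIR_GROUP_SCHED:
	 *   #define for_each_sched_entity(se)  for (; se; se = se->parent)
	 */
	for_each_sched_entity(se) {
		...
		if (cfs_rq_throttled(cfs_rq)) {
			se = NULL;		/* skip the second loop below */
			break;
		}
		...
	}

	for_each_sched_entity(se) {		/* no-op when se == NULL */
		...
	}

	if (!se)	/* meant to be "we walked up to the root", but se == NULL
			 * here can now also mean "we stopped at a throttled cfs_rq" */
		inc_nr_running(rq);
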
>
> How about instead something like the following. We can actually take
> advantage of the second loop always executing by deferring the
> accounting update on a throttle entity. This keeps the control flow
> within dequeue_task_fair linear.
>
> What do you think of (untested):
>
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -1744,13 +1744,12 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> break;
> cfs_rq = cfs_rq_of(se);
> enqueue_entity(cfs_rq, se, flags);
> - cfs_rq->h_nr_running++;
>
> - /* end evaluation on throttled cfs_rq */
> - if (cfs_rq_throttled(cfs_rq)) {
> - se = NULL;
> + /* note: ordering with throttle check to perform h_nr_running accounting on throttled entity below */
> + if (cfs_rq_throttled(cfs_rq))
> break;
> - }
> +
> + cfs_rq->h_nr_running++;
> flags = ENQUEUE_WAKEUP;
> }
>
> @@ -1786,13 +1785,12 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> for_each_sched_entity(se) {
> cfs_rq = cfs_rq_of(se);
> dequeue_entity(cfs_rq, se, flags);
> - cfs_rq->h_nr_running--;
>
> - /* end evaluation on throttled cfs_rq */
> - if (cfs_rq_throttled(cfs_rq)) {
> - se = NULL;
> + /* note: ordering with throttle check to perform h_nr_running accounting on throttled entity below */
> + if (cfs_rq_throttled(cfs_rq))
> break;
> - }
> +
> + cfs_rq->h_nr_running--;
> /* Don't dequeue parent if it has other entities besides

Looks good, as long as it fits the conventions of the scheduler code ;-)
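
(For completeness, my reading of how enqueue_task_fair() would end up
looking with the above applied.  The second loop and the trailing
"if (!se)" are not part of the quoted hunks, so this is a reconstruction
from the rest of the series, not a quote from the mail:)

	for_each_sched_entity(se) {
		if (se->on_rq)
			break;
		cfs_rq = cfs_rq_of(se);
		enqueue_entity(cfs_rq, se, flags);

		/* a throttled cfs_rq gets its h_nr_running increment in the
		 * second loop below */
		if (cfs_rq_throttled(cfs_rq))
			break;
		cfs_rq->h_nr_running++;

		flags = ENQUEUE_WAKEUP;
	}

	for_each_sched_entity(se) {
		cfs_rq = cfs_rq_of(se);
		cfs_rq->h_nr_running++;

		/* stop here: entities above a throttled cfs_rq should not see
		 * the newly enqueued task as runnable */
		if (cfs_rq_throttled(cfs_rq))
			break;

		update_cfs_load(cfs_rq, 0);
		update_cfs_shares(cfs_rq);
	}

	if (!se)
		inc_nr_running(rq);
	hrtick_update(rq);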


Thanks,
H.Seto


