Subject: Re: [PATCH v2 1/3] sched: sync a se with its cfs_rq when attaching and dettaching
Hi,

On Mon, Aug 17, 2015 at 04:45:50PM +0900, byungchul.park@lge.com wrote:
> From: Byungchul Park <byungchul.park@lge.com>
>
> The current code gets the cfs_rq's avg loads wrong when moving a task
> from one cfs_rq to another. I tested with "echo pid > cgroup" and found
> that e.g. cfs_rq->avg.load_avg became larger and larger whenever I moved
> a task from one cgroup to another again and again. We have to sync the
> se's avg loads with both the *prev* cfs_rq and the next cfs_rq when
> changing its group.
>
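
(Just to illustrate the symptom described above with a toy user-space
model: this is not the kernel code, only a sketch assuming a cfs_rq whose
load_avg is simply the sum of the load_avg of the entities attached to
it; all toy_* names are made up.)

#include <stdio.h>

struct toy_cfs_rq { long load_avg; };
struct toy_se     { long load_avg; };

/* add an entity's load to a runqueue (attach) */
static void toy_attach(struct toy_cfs_rq *cfs_rq, struct toy_se *se)
{
        cfs_rq->load_avg += se->load_avg;
}

/* remove an entity's load from a runqueue (detach) */
static void toy_detach(struct toy_cfs_rq *cfs_rq, struct toy_se *se)
{
        cfs_rq->load_avg -= se->load_avg;
}

int main(void)
{
        struct toy_cfs_rq a = { 0 }, b = { 0 };
        struct toy_se se = { .load_avg = 100 };
        int i;

        /* buggy move: keep attaching to the next cfs_rq without detaching */
        toy_attach(&a, &se);
        for (i = 0; i < 3; i++) {
                toy_attach(&b, &se);    /* a's sum is never reduced */
                toy_attach(&a, &se);    /* moving back only inflates it */
        }
        printf("buggy:   a=%ld b=%ld\n", a.load_avg, b.load_avg);

        /* correct move: detach from the prev cfs_rq, then attach to the next */
        a.load_avg = b.load_avg = 0;
        toy_attach(&a, &se);
        toy_detach(&a, &se);
        toy_attach(&b, &se);
        printf("correct: a=%ld b=%ld\n", a.load_avg, b.load_avg);
        return 0;
}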

My simple thought on the above; it may be nothing or wrong, so just ignore it in that case.

If a load-balance migration happened just before the cgroup change, the
prev cfs_rq and the next cfs_rq will be on different CPUs.
migrate_task_rq_fair() and update_cfs_rq_load_avg() will sync the se and
remove its load avg from the prev cfs_rq, whether or not the task is
queued, so that side is handled. dequeue_task() decays the se and the
prev cfs_rq before task_move_group_fair() is called. After the cfs_rq is
set in task_move_group_fair(), if the task is queued, the se's load avg
does not get added to the next cfs_rq (we could try setting
last_update_time to 0, as migration does, so that it gets added); if
!queued, we also need to add the se's load avg to the next cfs_rq.
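
(To make the "set last_update_time to 0 like migration does" idea
concrete, here is a minimal sketch in the same toy user-space model, not
the kernel's structures; it only assumes the convention that
last_update_time == 0 marks an se whose load has been removed from its
previous cfs_rq and still has to be attached at the next enqueue.)

struct toy_cfs_rq { long load_avg; };

struct toy_se {
        long load_avg;
        unsigned long last_update_time; /* 0 => load attached nowhere yet */
};

/* migration side: pull the load out of the prev cfs_rq and mark the se */
static void toy_migrate_detach(struct toy_cfs_rq *prev, struct toy_se *se)
{
        prev->load_avg -= se->load_avg;
        se->last_update_time = 0;
}

/* enqueue side: attach to the next cfs_rq only if still marked */
static void toy_enqueue_attach(struct toy_cfs_rq *next, struct toy_se *se,
                               unsigned long now)
{
        if (!se->last_update_time) {
                next->load_avg += se->load_avg;
                se->last_update_time = now;
        }
}

int main(void)
{
        struct toy_cfs_rq prev = { 100 }, next = { 0 };
        struct toy_se se = { .load_avg = 100, .last_update_time = 5 };

        toy_migrate_detach(&prev, &se);         /* prev loses the load */
        toy_enqueue_attach(&next, &se, 6);      /* next gains it, exactly once */
        toy_enqueue_attach(&next, &se, 7);      /* second enqueue is a no-op */
        return 0;
}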

If no load-balance migration happened when changing the cgroup, the prev
cfs_rq and the next cfs_rq may be on the same CPU (not sure). In that
case we need to remove the se's load avg ourselves, and we also need to
add the se's load avg to the next cfs_rq.
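
(Putting the two cases together, a rough toy sketch of the move-group
handling I am arguing for; the queued/!queued split just mirrors the
reasoning above and is not the actual kernel code.)

struct toy_cfs_rq { long load_avg; };

struct toy_se {
        long load_avg;
        unsigned long last_update_time; /* 0 => waiting to be attached */
        int queued;                     /* is the task on a runqueue? */
};

/* move se from prev to next while keeping both load_avg sums consistent */
static void toy_move_group(struct toy_cfs_rq *prev, struct toy_cfs_rq *next,
                           struct toy_se *se, unsigned long now)
{
        prev->load_avg -= se->load_avg;         /* always detach from prev */

        if (se->queued) {
                /* let the enqueue path attach it, as migration does */
                se->last_update_time = 0;
        } else {
                /* no enqueue will follow soon, so attach explicitly */
                next->load_avg += se->load_avg;
                se->last_update_time = now;
        }
}

int main(void)
{
        struct toy_cfs_rq prev = { 100 }, next = { 0 };
        struct toy_se se = { .load_avg = 100, .queued = 0 };

        toy_move_group(&prev, &next, &se, 1);   /* prev: 0, next: 100 */
        return 0;
}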

Thanks,
--
Tao

