    Subject: Re: [PATCH v2 1/2] sched/fair: Fix how load gets propagated from cfs_rq to its sched_entity
    From: Dietmar Eggemann
    Date: 2017-05-05

    Hi Tejun,

    On 04/05/17 18:39, Tejun Heo wrote:
    > Hello, Dietmar.
    >
    > On Thu, May 04, 2017 at 10:49:51AM +0100, Dietmar Eggemann wrote:
    >> On 04/05/17 07:21, Peter Zijlstra wrote:
    >>> On Thu, May 04, 2017 at 07:51:29AM +0200, Peter Zijlstra wrote:

    [...]

    >>
    >> I can't recreate this problem running 'numactl -N 0 ./schbench -m 2 -t
    >> 10 -s 10000 -c 15000 -r 30' on my E5-2690 v2 (IVB-EP, 2 sockets, 10
    >> cores / socket, 2 threads / core).
    >>
    >> I tried tip/sched/core comparing running in 'cpu:/' and 'cpu:/foo' and
    >> using your patch on top with all the combinations of {NO_}FUDGE,
    >> {NO_}FUDGE2 with prop_type=shares_avg or prop_type=runnable.
    >>
    >> Were you able to see the issue on tip/sched/core w/o your patch on your
    >> machine?
    >>
    >> The workload of n 60% periodic tasks on n logical cpus always creates a
    >> very stable task distribution for me.
    >
    > It depends heavily on what else is going on in the system. On the
    > test systems that I'm using, there's always something not-too-heavy
    > going on. The pattern over time isn't too varied and the latency
    > results are usually stable and the grouping of results is very clear
    > as the difference between the load balancer working properly and not
    > shows up as up to an order of magnitude difference in p99 latencies.

    OK, that makes sense. You do need light background noise (independent
    of schbench) to create work for the load balancer.

    I switched to my Hikey board (hot-plugged out the 2nd cluster, so 4
    remaining cores with performance governor) because we should see the
    effect regardless of the topology. There is no background noise on my
    Debian fs.
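
    For reference, a minimal sketch of that setup via the standard sysfs
    cpu hotplug and cpufreq interfaces (not the exact steps I ran; that
    the 2nd cluster is cpu4-7 is an assumption based on the Hikey's 2x4
    topology):

    #include <stdio.h>

    /* Write a string to a sysfs file (cpu hotplug / governor knobs). */
    static int write_str(const char *path, const char *val)
    {
            FILE *f = fopen(path, "w");

            if (!f)
                    return -1;
            fprintf(f, "%s\n", val);
            return fclose(f);
    }

    int main(void)
    {
            char path[128];
            int cpu;

            /* offline the 2nd cluster (cpu4-7 assumed) */
            for (cpu = 4; cpu <= 7; cpu++) {
                    snprintf(path, sizeof(path),
                             "/sys/devices/system/cpu/cpu%d/online", cpu);
                    write_str(path, "0");
            }

            /* performance governor on the remaining cores */
            for (cpu = 0; cpu <= 3; cpu++) {
                    snprintf(path, sizeof(path),
                             "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_governor",
                             cpu);
                    write_str(path, "performance");
            }
            return 0;
    }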

    That's why I don't see any effect even when I increase the C/S
    (cputime/sleeptime) ratio while running 'schbench -m 2 -t 2 -s S -c C
    -r 30'. The only sources of disturbance are some additional schbench
    threads which sometimes force one of the worker threads to get
    co-scheduled with another worker thread. A sketch of the duty-cycle
    model follows below.
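
    To make the C/S knob concrete, here is a minimal sketch of the
    duty-cycle model behind schbench's -c (cputime) and -s (sleeptime)
    parameters, both in usec. This is not schbench itself, just one
    worker: burn the cpu for C us, then sleep for S us, giving a duty
    cycle of C/(C+S); '-s 10000 -c 15000' from the quoted run gives the
    60% mentioned above.

    #include <time.h>
    #include <unistd.h>

    static unsigned long long now_ns(void)
    {
            struct timespec ts;

            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec * 1000000000ULL + ts.tv_nsec;
    }

    static void worker(unsigned long c_us, unsigned long s_us)
    {
            for (;;) {
                    unsigned long long end = now_ns() + c_us * 1000ULL;

                    while (now_ns() < end)
                            ;                       /* cputime: busy loop */
                    usleep(s_us);                   /* sleeptime */
            }
    }

    int main(void)
    {
            worker(15000, 10000);                   /* 60% duty cycle */
            return 0;
    }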

    https://drive.google.com/file/d/0B2f-ZAwV_YnmTDhWUk5ZRHdBRUU/view shows
    such a case, where the additional schbench thread 'schbench-2206' (green
    marker line in the picture) forces the worker thread 'schbench-2209' to
    wakeup-migrate from cpu3 to cpu0, where it gets co-scheduled with the
    worker thread 'schbench-2210' for a while.

    > For these differences to matter, you need to push the machine so that
    > it's right at the point of saturation - e.g. increase duty cycle till
    > p99 starts to deteriorate w/o cgroup.
    >
    > Thanks.
    >
