    Date: 2019-11-05
    Subject: Re: [Patch v4 0/6] Introduce Thermal Pressure

    Hi Thara,

    On Thursday 31 Oct 2019 at 12:41:20 (-0400), Thara Gopinath wrote:
    [...]
    > >> Regarding testing, basic build, boot and sanity testing have been
    > >> performed on a db845c platform with a Debian file system.
    > >> Further, dhrystone and hackbench tests have been
    > >> run with the thermal pressure algorithm. During testing, due to
    > >> constraints of the step-wise governor in dealing with big.LITTLE systems,
    > >> the trip point 0 temperature was made asymmetric between the CPUs in the
    > >> little cluster and the big cluster; the idea being that
    > >> the big cores will heat up and the CPU cooling device will throttle the
    > >> frequency of the big cores faster, thereby limiting the maximum available
    > >> capacity, and the scheduler will spread out tasks to the little cores as well.
    > >>
    > >
    > > Can you please share the changes you've made to sdm845.dtsi and a kernel
    > > base on top of which to apply your patches? I would like to reproduce
    > > your results and run more tests and it would be good if our setups were
    > > as close as possible.
    > Hi Ionela,
    > Thank you for the review.
    > So I tested this on the 5.4-rc1 kernel. The dtsi change is to reduce the
    > thermal trip points for the big CPUs to 60000 or 70000 from the default
    > 90000. I did this for two reasons:
    > 1. I could never get the db845 to heat up sufficiently for my test cases
    > with the default trip.
    > 2. I was using the default step-wise governor for thermal. I did not
    > want little and big to start throttling by the same % because then the
    > task placement ratio would remain the same between little and big cores.
    >
    >
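
    A quick aside on reason 2 above: with made-up capacities, throttling
    both clusters by the same percentage leaves the big/little capacity
    ratio (and therefore the placement ratio) unchanged, while capping
    only the bigs shifts it. A small standalone sketch, purely
    illustrative:

        #include <stdio.h>

        int main(void)
        {
                /* Illustrative capacities only, not db845c values. */
                double big = 1024.0, little = 400.0;
                double pct = 0.30;      /* assumed fraction of throttling */

                /* Same percentage cap on both clusters: ratio unchanged. */
                printf("same %% cap:      big/little %.2f -> %.2f\n",
                       big / little, (big * (1 - pct)) / (little * (1 - pct)));

                /* Capping only the bigs (asymmetric trips): ratio drops. */
                printf("bigs capped only: big/little %.2f -> %.2f\n",
                       big / little, (big * (1 - pct)) / little);
                return 0;
        }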

    Some early testing on this showed that when setting the trip point to
    60000 for the big CPUs and the big cluster, and running hackbench (1
    group, 30000 loops), the cooling state of the big cluster ends up
    always set to the maximum (the lowest OPP), which results in capacity
    inversion (almost) continuously.

    For 70000 the average cooling state of the big CPUs is around 20, so a
    few more OPPs are left available on the bigs more of the time, but the
    capacity of the bigs probably still stays below the capacity of the
    little CPUs for most of this test as well.
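
    To make "capacity inversion" concrete, here is a rough sketch with
    made-up numbers (not measured db845c values): once the capping pins
    the bigs at a low enough OPP, their effective capacity drops below
    that of the littles, assuming capacity scales roughly linearly with
    the capped frequency.

        #include <stdio.h>

        int main(void)
        {
                /* Illustrative values only. */
                unsigned long big_max_cap = 1024;     /* big capacity at max OPP */
                unsigned long little_cap  = 400;      /* little CPU capacity     */
                unsigned long big_max_khz = 2800000;  /* assumed big max freq    */
                unsigned long capped_khz  = 800000;   /* assumed lowest big OPP  */

                unsigned long capped_cap = big_max_cap * capped_khz / big_max_khz;

                printf("capped big capacity %lu vs little capacity %lu: %s\n",
                       capped_cap, little_cap,
                       capped_cap < little_cap ? "capacity inversion" : "no inversion");
                return 0;
        }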

    I think that explains the difference in results that you obtained
    below. This is good, as it shows that thermal pressure is useful, but
    it also means there shouldn't be much difference between the different
    decay periods, as can indeed be observed in your results below.
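
    For reference, a toy sketch (not the code from the series) of how a
    PELT-style geometric decay with a given half-life smooths the capped
    capacity into a thermal pressure signal; the decay periods compared in
    the results below play the role of 'half_life_ms' here, and all the
    numbers are made up:

        /* Needs -lm when linking. */
        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
                double half_life_ms = 32.0;              /* e.g. one of the decay periods    */
                double y = pow(0.5, 1.0 / half_life_ms); /* per-1ms-window decay factor      */
                double delta = 300.0;                    /* assumed capacity lost to capping */
                double pressure = 0.0;

                /* 100 ms fully capped, then 100 ms uncapped. */
                for (int ms = 1; ms <= 200; ms++) {
                        pressure = pressure * y + (ms <= 100 ? delta * (1.0 - y) : 0.0);
                        if (ms % 50 == 0)
                                printf("t=%3d ms  thermal pressure ~ %5.1f\n", ms, pressure);
                }
                return 0;
        }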

    That being said, I did not obtain such significant results on my
    side, but I'll try again with the kernel you've pointed me to offline.

    Thanks,
    Ionela.

    > >
    > >> Test Results
    > >>
    > >> Hackbench: 1 group, 30000 loops, 10 runs
    > >>                                                Result    SD
    > >>                                                (Secs)    (% of mean)
    > >> No Thermal Pressure                            14.03     2.69%
    > >> Thermal Pressure PELT Algo. Decay : 32 ms      13.29     0.56%
    > >> Thermal Pressure PELT Algo. Decay : 64 ms      12.57     1.56%
    > >> Thermal Pressure PELT Algo. Decay : 128 ms     12.71     1.04%
    > >> Thermal Pressure PELT Algo. Decay : 256 ms     12.29     1.42%
    > >> Thermal Pressure PELT Algo. Decay : 512 ms     12.42     1.15%
    > >>
    > >> Dhrystone Run Time: 20 threads, 3000 MLOOPS
    > >>                                                Result    SD
    > >>                                                (Secs)    (% of mean)
    > >> No Thermal Pressure                            9.452     4.49%
    > >> Thermal Pressure PELT Algo. Decay : 32 ms      8.793     5.30%
    > >> Thermal Pressure PELT Algo. Decay : 64 ms      8.981     5.29%
    > >> Thermal Pressure PELT Algo. Decay : 128 ms     8.647     6.62%
    > >> Thermal Pressure PELT Algo. Decay : 256 ms     8.774     6.45%
    > >> Thermal Pressure PELT Algo. Decay : 512 ms     8.603     5.41%
    > >>
    > >
    > > Do you happen to know by how much the CPUs were capped during these
    > > experiments?
    >
    > I don't have any captured results here. I know that the big cores
    > were capped and at times there was capacity inversion.
    >
    > Also, I will fix the nits commented on above.
    >
    > >
    > > Thanks,
    > > Ionela.
    > >
    >
    >
    >
    > --
    > Warm Regards
    > Thara
