Subject: Re: [PATCH v3 0/4] sched/fair: Burstable CFS bandwidth controller

> On Mar 10, 2021, at 7:11 PM, Odin Ugedal <odin@ugedal.com> wrote:
>
> Hi,
>
>> If there are cases where the "start bandwidth" matters, I think there is a need to expose the
>> "start bandwidth" explicitly too. However, I doubt the existence of such cases, judging from my view
>> and the two examples above.
>
> Yeah, I don't think there will be any cases where users will be
> "depending" on having burst available,
> so I agree in that sense.
>
>> In my view, this patchset keeps cgroup usage within the quota in the longer term, and allows a
>> cgroup to respond to a burst of work with the help of a reasonable burst buffer. If the quota is set
>> above the average usage and enough burst buffer is configured to meet the needs of bursty work,
>> it makes no difference whether this cgroup starts with 0 bandwidth or all of it.
>> Thus I used sysctl_sched_cfs_bw_burst_onset_percent to decide the start bandwidth,
>> for convenience. If this sysctl interface is confusing, I wonder whether it
>> is a good idea not to expose this interface at all.
>>
>> For the first case mentioned above, if Kubernetes users care about the "start bandwidth" for process startup,
>> maybe it is better to give all of it rather than only a part?
>
> Yeah, I am a bit afraid there will be some confusion, so not sure if
> the sysctl is the best way to do it.
>
> But I would like feedback from others to highlight the problem as
> well; that would be helpful. I think a simple "API"
> where you get 0 burst or full burst on "set" (whichever we decide on)
> would be best, to avoid unnecessary complexity.
>
> Having burst available when starting up a new process in a new cgroup might be
> helpful, so maybe that is a vote for
> full burst? However, in the long term that doesn't matter, so 0 burst on
> start would work as well.
>
>> For the second case with quota changes over time, I think it is important to make sure each change stays in
>> effect long enough to enforce the average quota limit. Does it really matter to control the "start burst" on each change?
>
> No, I don't think so. Doing so would be another thing to set per
> cgroup, and that would just clutter the API
> more than necessary, imo, since we cannot come up with any real use cases.
>
>> It is a copy of runtime at period start, used to calculate the burst time during a period.
>> It is not quite remaining_runtime_prev_period.
>
> Ahh, I see, I misunderstood the code. So in essence it is
> "runtime_at_period_start"?
>

Yes, it is "runtime_at_period_start".

>> Yeah, there is the updating problem. It is okay not to expose cfs_b->runtime then.
>
> Yeah, I think dropping it altogether is the best solution.
>
>
>> This comment does not imply any loss or any unnecessary throttling for the present cfs_b.
>> All it means is that any quota refilling that was not done while the timer was stopped should be
>> done on timer start, for the burstable cfs_b.
>>
>> Maybe I should change this comment in some way if it is misleading?
>
> I think I formulated my question badly. The comment makes sense; I am
> just trying to compare how "start_cfs_bandwidth"
> works after your patch with how it works currently. As I
> understand it, without this patch "start_cfs_bandwidth" will
> never refill runtime, while with your patch, it will refill even when
> overrun=0 with burst disabled. Is that an intended change in
> behavior, or am I not understanding the patch?
>

Good point. The way "start_cfs_bandwidth" works does change. Currently, cfs_b does not need to refill
bandwidth there, because no quota is consumed during the period before the timer stops. With this patch,
runtime is refilled whether or not burst is enabled. Are you suggesting that runtime should not be refilled
here unless burst is enabled?

>
> On another note, I have also been testing this patch, and I am not
> able to reproduce your schbench results. Both with and without burst,
> it gets the same result, and nr_throttled stays at 0 (tested on a
> 32-core system). Can you try to rerun your tests with the mainline
> kernel to see if you still get the same results? (Also, I see you are running
> with 30 threads. How many cores does your test setup have?) To actually
> say that the result is real, all cores used should maybe be
> exclusively reserved as well, to avoid issues where other processes
> cause a spike in latency.
>

Spikes indeed cause trouble. If nr_throttled stays at 0, I suggest lowering the quota from 700000 to 600000,
which is still above the average utilization of 500%. I have rerun the test on a 64-core system and reproduced
the results, and I think it should work on a 32-core system too, as there are only 20 active workers in each round.
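
Concretely, that change is just the following (assuming the test cgroup sits at /sys/fs/cgroup/cpu/test as in the
recipe below; adjust the path to wherever your test cgroup actually lives, and note this assumes the default
100000us cpu.cfs_period_us):

# 600000us of quota per 100000us period allows 6 CPUs on average,
# still above the ~500% (5-CPU) average utilization of the 30-thread workload.
echo 600000 > /sys/fs/cgroup/cpu/test/cpu.cfs_quota_us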

If you still have trouble, I suggest testing in the following way, which should work even on a two-core system.

mkdir /sys/fs/cgroup/cpu/test
echo $$ > /sys/fs/cgroup/cpu/test/cgroup.procs
echo 100000 > /sys/fs/cgroup/cpu/test/cpu.cfs_quota_us
echo 300000 > /sys/fs/cgroup/cpu/test/cpu.cfs_burst_us

./schbench -m 1 -t 3 -r 20 -c 200000 -R 4

On my machine, two workers work for 200ms and sleep for 300ms in each round. The average utilization is
around 80%.
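
For a sanity check, reading cpu.stat for the test group (the standard nr_periods/nr_throttled/throttled_time
counters of the cgroup v1 cpu controller) should show the difference; the arithmetic below assumes the default
100000us cfs_period_us:

# schbench -R 4 -c 200000: 4 requests/s * 200ms of CPU each ~= 80% of one CPU on average.
# A quota of 100000us per 100000us period allows 100% on average, so the average fits, but while
# two workers overlap the demand is ~200%, exceeding the per-period quota; a burst of 300000us
# lets quota left unused during the sleep phases absorb that spike.
cat /sys/fs/cgroup/cpu/test/cpu.stat
# With cpu.cfs_burst_us set as above, nr_throttled should stay near 0; with burst left at 0 it should climb.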


>
> Odin
