Subject: Re: [PATCH] sched: cgroup SCHED_IDLE support
On Sat, Jun 26, 2021 at 2:57 AM Tejun Heo <tj@kernel.org> wrote:
[snip]
>
> Would you care to share some concrete use cases?
>
> Thank you.
>
> --
> tejun

Sure thing. There are two use cases we've found compelling:

1. On a machine, different users are given their own top-level cgroup
(configured with an appropriate number of shares). Each user is free
to spawn any threads and create any additional cgroups within their
top-level group.

Some users would like to run high-priority, latency-sensitive work
(for example, responding to an RPC) as well as some batch tasks (i.e.
background work such as data manipulation, transcoding, etc.) within
their cgroup. The batch tasks should interfere minimally with the
high-priority work. However, it is still desirable that this batch
work compete on equal footing with the high-priority work against the
jobs of other users on the machine.

To achieve this, the user sets up two sub-cgroups, one of which is
marked idle. Tasks in the idle cgroup are always preempted on wakeup
of a task in the other sub-cgroup (but not on wakeup of another
user's task). This is not possible with the per-task SCHED_IDLE
setting, and cgroup shares/weights alone are not as strong a
mechanism as SCHED_IDLE.
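
For concreteness, here's a minimal sketch of that setup, assuming
cgroup v2 mounted at /sys/fs/cgroup and the cpu.idle interface this
patch proposes; "alice", "serving" and "batch" are placeholder names:

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>

static void write_file(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f || fputs(val, f) == EOF || fclose(f) == EOF) {
		perror(path);
		exit(1);
	}
}

int main(void)
{
	/* Enable the cpu controller for alice's sub-cgroups. */
	write_file("/sys/fs/cgroup/alice/cgroup.subtree_control", "+cpu");

	mkdir("/sys/fs/cgroup/alice/serving", 0755);
	mkdir("/sys/fs/cgroup/alice/batch", 0755);

	/*
	 * Mark the batch group idle: on wakeup, a task in "serving"
	 * preempts whatever is running in "batch", while "alice" as a
	 * whole still competes normally with other users' groups.
	 */
	write_file("/sys/fs/cgroup/alice/batch/cpu.idle", "1");
	return 0;
}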

2. We can create a top-level idle cgroup in which we place users who
want to run best-effort work (e.g. long-running computations). Since
it is the top-level cgroup that is marked idle, any other task on the
machine will always preempt something running within the top-level
idle cgroup. We can also easily maintain the relative weights between
different users within the idle group.

This top-level idle group allows for soaking up otherwise unused
cycles, and offers cheap machine quota for users with
latency-tolerant jobs.
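
A corresponding sketch for this arrangement, reusing the write_file()
helper from the snippet above (again, "besteffort", "alice" and "bob"
are placeholder names, and cpu.idle is the knob this patch proposes):

/* Top-level idle group: any other task preempts it on wakeup. */
mkdir("/sys/fs/cgroup/besteffort", 0755);
write_file("/sys/fs/cgroup/besteffort/cpu.idle", "1");
write_file("/sys/fs/cgroup/besteffort/cgroup.subtree_control", "+cpu");

/*
 * Per-user children keep their relative weights: alice gets twice
 * bob's share of whatever cycles are left over.
 */
mkdir("/sys/fs/cgroup/besteffort/alice", 0755);
mkdir("/sys/fs/cgroup/besteffort/bob", 0755);
write_file("/sys/fs/cgroup/besteffort/alice/cpu.weight", "200");
write_file("/sys/fs/cgroup/besteffort/bob/cpu.weight", "100");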
