Subject: Re: [v4 3/4] mm, oom: introduce oom_priority for memory cgroups
On Wed, 26 Jul 2017, Roman Gushchin wrote:

> Introduce a per-memory-cgroup oom_priority setting: an integer number
> within the [-10000, 10000] range, which defines the order in which
> the OOM killer selects victim memory cgroups.
>
> OOM killer prefers memory cgroups with larger priority if they are
> populated with eligible tasks.
>
> The oom_priority value is compared within sibling cgroups.
>
> The root cgroup has the oom_priority 0, which cannot be changed.
>

Awesome! Very excited to see that you implemented this suggestion; it
is similar to the priority-based oom killing that we have done. I think
this kind of support is long overdue in the oom killer.

Comment inline.

> Signed-off-by: Roman Gushchin <guro@fb.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Tejun Heo <tj@kernel.org>
> Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> Cc: kernel-team@fb.com
> Cc: cgroups@vger.kernel.org
> Cc: linux-doc@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mm@kvack.org
> ---
>  include/linux/memcontrol.h |  3 +++
>  mm/memcontrol.c            | 55 ++++++++++++++++++++++++++++++++++++++++++++--
>  2 files changed, 56 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index b21bbb0edc72..d31ac58e08ad 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -206,6 +206,9 @@ struct mem_cgroup {
> 	/* cached OOM score */
> 	long oom_score;
>
> +	/* OOM killer priority */
> +	short oom_priority;
> +
> 	/* handle for "memory.events" */
> 	struct cgroup_file events_file;
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index ba72d1cf73d0..2c1566995077 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2710,12 +2710,21 @@ static void select_victim_memcg(struct mem_cgroup *root, struct oom_control *oc)
> 	for (;;) {
> 		struct cgroup_subsys_state *css;
> 		struct mem_cgroup *memcg = NULL;
> +		short prio = SHRT_MIN;
> 		long score = LONG_MIN;
>
> 		css_for_each_child(css, &root->css) {
> 			struct mem_cgroup *iter = mem_cgroup_from_css(css);
>
> -			if (iter->oom_score > score) {
> +			if (iter->oom_score == 0)
> +				continue;
> +
> +			if (iter->oom_priority > prio) {
> +				memcg = iter;
> +				prio = iter->oom_priority;
> +				score = iter->oom_score;
> +			} else if (iter->oom_priority == prio &&
> +				   iter->oom_score > score) {
> 				memcg = iter;
> 				score = iter->oom_score;
> 			}

Your tiebreaking is based on iter->oom_score, which I suppose makes
sense given that the oom killer has traditionally tried to kill the
largest memory-hogging process.

We actually tiebreak on a timestamp of memcg creation and prefer to kill
from the newer memcg when iter->oom_priority is the same. The reasoning
is that the jobs we schedule on a machine have an inherent priority, but
the scheduler is unaware of other jobs running at the same priority, so
the kill decision, if based on iter->oom_score, may differ depending on
current memory usage.

I'm not necessarily arguing against using iter->oom_score, but I was
wondering whether you would also find tiebreaking on a timestamp, when
priorities are the same, to be the clearer semantic. It's similar to how
the system oom killer used to break ties based on which task_struct
appeared later in the tasklist when memory usage was the same.

Your approach makes oom killing less likely in the near term since it
kills a more memory-hogging memcg, but it has the potential to lose more
work. A timestamp-based approach loses the least amount of work by
preferring to kill newer memcgs, but oom killing may be more frequent if
smaller child memcgs are killed. I would argue the former is the
responsibility of the user for using the same priority.
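
To make the comparison concrete, the timestamp variant of your
select_victim_memcg() loop would look roughly like this; creation_ts is
hypothetical here (say, a jiffies value recorded when the memcg is
created) and is not part of this patch, and the rest of the function's
oom_score bookkeeping is elided:

	css_for_each_child(css, &root->css) {
		struct mem_cgroup *iter = mem_cgroup_from_css(css);

		if (iter->oom_score == 0)
			continue;

		if (iter->oom_priority > prio) {
			memcg = iter;
			prio = iter->oom_priority;
			ts = iter->creation_ts;
		} else if (iter->oom_priority == prio &&
			   time_after(iter->creation_ts, ts)) {
			/* same priority: prefer the newer memcg */
			memcg = iter;
			ts = iter->creation_ts;
		}
	}

where ts is an unsigned long initialized to 0. The priority comparison
stays exactly as you have it; only the tie is resolved differently.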

> @@ -2782,7 +2791,15 @@ bool mem_cgroup_select_oom_victim(struct oom_control *oc)
> 	 * For system-wide OOMs we should consider tasks in the root cgroup
> 	 * with oom_score larger than oc->chosen_points.
> 	 */
> -	if (!oc->memcg) {
> +	if (!oc->memcg && !(oc->chosen_memcg &&
> +			    oc->chosen_memcg->oom_priority > 0)) {
> +		/*
> +		 * Root memcg has priority 0, so if chosen memcg has lower
> +		 * priority, any task in root cgroup is preferable.
> +		 */
> +		if (oc->chosen_memcg && oc->chosen_memcg->oom_priority < 0)
> +			oc->chosen_points = 0;
> +
> 		select_victim_root_cgroup_task(oc);
>
> 		if (oc->chosen && oc->chosen_memcg) {
> @@ -5373,6 +5390,34 @@ static ssize_t memory_oom_kill_all_tasks_write(struct kernfs_open_file *of,
> 	return nbytes;
> }
>
> +static int memory_oom_priority_show(struct seq_file *m, void *v)
> +{
> +	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
> +
> +	seq_printf(m, "%d\n", memcg->oom_priority);
> +
> +	return 0;
> +}
> +
> +static ssize_t memory_oom_priority_write(struct kernfs_open_file *of,
> +					 char *buf, size_t nbytes, loff_t off)
> +{
> +	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
> +	int oom_priority;
> +	int err;
> +
> +	err = kstrtoint(strstrip(buf), 0, &oom_priority);
> +	if (err)
> +		return err;
> +
> +	if (oom_priority < -10000 || oom_priority > 10000)
> +		return -EINVAL;
> +
> +	memcg->oom_priority = (short)oom_priority;
> +
> +	return nbytes;
> +}
> +
> static int memory_events_show(struct seq_file *m, void *v)
> {
> 	struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));
> @@ -5499,6 +5544,12 @@ static struct cftype memory_files[] = {
> 		.write = memory_oom_kill_all_tasks_write,
> 	},
> 	{
> +		.name = "oom_priority",
> +		.flags = CFTYPE_NOT_ON_ROOT,
> +		.seq_show = memory_oom_priority_show,
> +		.write = memory_oom_priority_write,
> +	},
> +	{
> 		.name = "events",
> 		.flags = CFTYPE_NOT_ON_ROOT,
> 		.file_offset = offsetof(struct mem_cgroup, events_file),
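
For reference, once this lands, userspace sets the knob with a plain
write to the new file; a minimal sketch, where the cgroup2 mount point
and memcg name are just examples:

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		/* assumes cgroup2 at /sys/fs/cgroup and an existing "job-a" memcg */
		const char *path = "/sys/fs/cgroup/job-a/memory.oom_priority";
		const char *val = "-100\n";	/* must stay within [-10000, 10000] */
		int fd = open(path, O_WRONLY);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		if (write(fd, val, strlen(val)) < 0)
			perror("write");
		close(fd);
		return 0;
	}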
