Subject: [PATCH bpf-next] bpf: fix cgroup bpf release synchronization

Since commit 4bfc0bb2c60e ("bpf: decouple the lifetime of cgroup_bpf
from cgroup itself"), cgroup_bpf release occurs asynchronously
(from a worker context), and before the release of the cgroup itself.

This introduced a previously non-existing race between the release
and update paths: e.g. a leaf cgroup's cgroup_bpf can be released
while a new bpf program is being attached to one of its ancestor
cgroups. The race may result in a double-free and other memory
corruptions.
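
Roughly, the problematic interleaving can be modeled by the following
userspace sketch (hypothetical names, plain pthreads; it only mimics the
shared effective[] pointer, not the actual kernel code): the release side
frees the shared array but leaves the stale pointer behind, so the update
side frees it a second time.

#include <pthread.h>
#include <stdlib.h>

struct prog_array { int cnt; };

/* stands in for cgrp->bpf.effective[type] */
static struct prog_array *effective;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* models cgroup_bpf_release() before this patch: no lock, pointer left dangling */
static void *release_path(void *arg)
{
	free(effective);
	return NULL;
}

/* models attaching a prog to an ancestor: swap in a new array, free the old one */
static void *update_path(void *arg)
{
	struct prog_array *old;

	pthread_mutex_lock(&lock);
	old = effective;
	effective = calloc(1, sizeof(*effective));
	free(old);		/* double-free if release_path() already freed it */
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	effective = calloc(1, sizeof(*effective));
	pthread_create(&a, NULL, release_path, NULL);
	pthread_create(&b, NULL, update_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}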

To fix the problem, let's protect the body of cgroup_bpf_release()
with cgroup_mutex, as it effectively was before, when all this
code was called from the cgroup release path with the cgroup mutex held.

Also make sure that we don't leave dangling pointers to already freed
effective prog arrays; otherwise they can be released again by the
update path. This wasn't necessary before, because the update path
couldn't see such a cgroup: cgroup_bpf and the cgroup itself were
released together.
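
In the sketch above, the fix corresponds to taking the same lock in the
release path and clearing the shared pointer before freeing it (again
just an illustration, not the kernel code):

/* models cgroup_bpf_release() after this patch */
static void *release_path_fixed(void *arg)
{
	pthread_mutex_lock(&lock);	/* plays the role of cgroup_mutex */
	free(effective);
	effective = NULL;		/* like RCU_INIT_POINTER(effective[type], NULL) */
	pthread_mutex_unlock(&lock);
	return NULL;
}

With the lock held the two paths can no longer interleave, and the
update side's free(NULL) is harmless, just as bpf_prog_array_free()
returns early for a NULL array.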

Big thanks to Tejun Heo for discovering and debugging this problem!

Fixes: 4bfc0bb2c60e ("bpf: decouple the lifetime of cgroup_bpf from cgroup itself")
Reported-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Roman Gushchin <guro@fb.com>
---
kernel/bpf/cgroup.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index 1b65ab0df457..3128770c0f47 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -19,6 +19,8 @@
 #include <linux/bpf-cgroup.h>
 #include <net/sock.h>
 
+#include "../cgroup/cgroup-internal.h"
+
 DEFINE_STATIC_KEY_FALSE(cgroup_bpf_enabled_key);
 EXPORT_SYMBOL(cgroup_bpf_enabled_key);
 
@@ -41,6 +43,8 @@ static void cgroup_bpf_release(struct work_struct *work)
 	struct bpf_prog_array *old_array;
 	unsigned int type;
 
+	mutex_lock(&cgroup_mutex);
+
 	for (type = 0; type < ARRAY_SIZE(cgrp->bpf.progs); type++) {
 		struct list_head *progs = &cgrp->bpf.progs[type];
 		struct bpf_prog_list *pl, *tmp;
@@ -57,10 +61,13 @@ static void cgroup_bpf_release(struct work_struct *work)
 		}
 		old_array = rcu_dereference_protected(
 				cgrp->bpf.effective[type],
-				percpu_ref_is_dying(&cgrp->bpf.refcnt));
+				lockdep_is_held(&cgroup_mutex));
+		RCU_INIT_POINTER(cgrp->bpf.effective[type], NULL);
 		bpf_prog_array_free(old_array);
 	}
 
+	mutex_unlock(&cgroup_mutex);
+
 	percpu_ref_exit(&cgrp->bpf.refcnt);
 	cgroup_put(cgrp);
 }
--
2.21.0