From:    Waiman Long <>
Subject: [PATCH] sched: Don't call any kfree*() API in do_set_cpus_allowed()
Date:    Mon, 30 Oct 2023 20:14:18 -0400
Commit 851a723e45d1 ("sched: Always clear user_cpus_ptr in do_set_cpus_allowed()") added a kfree() call to free any user-provided affinity mask, if present. It was later changed to use kfree_rcu() in commit 9a5418bc48ba ("sched/core: Use kfree_rcu() in do_set_cpus_allowed()") to avoid a circular locking dependency problem.
It turns out that even kfree_rcu() isn't safe for avoiding the circular locking problem. As reported by the kernel test robot, the following circular locking dependency still exists:
&rdp->nocb_lock --> rcu_node_0 --> &rq->__lock
So no kfree*() API can be used in do_set_cpus_allowed(). To prevent a memory leak, the unused user-provided affinity mask is now saved in a lockless list to be reused later by subsequent sched_setaffinity() calls.
Without kfree_rcu(), the internal cpumask_rcuhead union can also be removed, since a lockless list entry only needs to hold a single pointer.
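For illustration, here is a minimal user-space C sketch of the same reuse pattern, assuming a single (or serialized) consumer just like the kernel's llist_del_first(). The names mask_free_push(), mask_alloc() and MASK_BYTES are made up for this example and do not appear in the patch, which uses llist_add()/llist_del_first() on cpumask_free_lhead as shown in the diff below:

#include <stdatomic.h>
#include <stdlib.h>

#define MASK_BYTES 128          /* illustrative stand-in for cpumask_size() */

struct free_node {
        struct free_node *next;
};

/* Head of the lockless free list (analogue of cpumask_free_lhead). */
static _Atomic(struct free_node *) free_head;

/*
 * "Free" a mask-sized buffer from a context where calling free() would be
 * unsafe: push it onto the list with a compare-and-swap loop.  This mirrors
 * the llist_add() call in do_set_cpus_allowed().
 */
static void mask_free_push(void *buf)
{
        struct free_node *node = buf;

        node->next = atomic_load(&free_head);
        while (!atomic_compare_exchange_weak(&free_head, &node->next, node))
                ;       /* node->next now holds the new head; retry */
}

/*
 * Allocate a mask-sized buffer, preferring a recycled one.  Like
 * llist_del_first(), the pop below assumes a single (or serialized)
 * consumer; producers may push concurrently.
 */
static void *mask_alloc(void)
{
        struct free_node *node = atomic_load(&free_head);

        while (node &&
               !atomic_compare_exchange_weak(&free_head, &node, node->next))
                ;       /* a producer pushed a new head; node was refreshed, retry */

        if (!node)
                return malloc(MASK_BYTES);

        /* Callers are expected to overwrite the mask contents anyway. */
        return node;
}

The push side is the usual compare-and-swap loop; the pop side stays ABA-free only because nothing but the single consumer ever removes nodes, which mirrors the serialization requirement documented for llist_del_first(). The buffer being recycled must be at least pointer-sized and pointer-aligned, which a kmalloc()-ed cpumask is, so the list pointer can simply be overlaid on it.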
Fixes: 851a723e45d1 ("sched: Always clear user_cpus_ptr in do_set_cpus_allowed()")
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202310302207.a25f1a30-oliver.sang@intel.com
Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/sched/core.c | 31 ++++++++++++++++++-------------
 1 file changed, 18 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 802551e0009b..f536d11a284e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2789,6 +2789,11 @@ __do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
         set_next_task(rq, p);
 }
 
+/*
+ * A lockless list of free cpumasks to be used for user cpumasks.
+ */
+static LLIST_HEAD(cpumask_free_lhead);
+
 /*
  * Used for kthread_bind() and select_fallback_rq(), in both cases the user
  * affinity (if any) should be destroyed too.
@@ -2800,29 +2805,29 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
                 .user_mask = NULL,
                 .flags     = SCA_USER, /* clear the user requested mask */
         };
-        union cpumask_rcuhead {
-                cpumask_t cpumask;
-                struct rcu_head rcu;
-        };
 
         __do_set_cpus_allowed(p, &ac);
 
         /*
-         * Because this is called with p->pi_lock held, it is not possible
-         * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
-         * kfree_rcu().
+         * We can't call any kfree*() API here as p->pi_lock and/or rq lock
+         * may be held. So we save it in a llist to be reused in the next
+         * sched_setaffinity() call.
          */
-        kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
+        if (ac.user_mask)
+                llist_add((struct llist_node *)ac.user_mask, &cpumask_free_lhead);
 }
 
 static cpumask_t *alloc_user_cpus_ptr(int node)
 {
-        /*
-         * See do_set_cpus_allowed() above for the rcu_head usage.
-         */
-        int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
+        struct cpumask *pmask = NULL;
+
+        if (!llist_empty(&cpumask_free_lhead))
+                pmask = (struct cpumask *)llist_del_first(&cpumask_free_lhead);
+
+        if (!pmask)
+                pmask = kmalloc_node(cpumask_size(), GFP_KERNEL, node);
 
-        return kmalloc_node(size, GFP_KERNEL, node);
+        return pmask;
 }
 
 int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
-- 
2.39.3