Subject: [PATCH v6] x86/resctrl: Fix task CLOSID/RMID update race
When the user moves a running task to a new rdtgroup using the tasks
file interface or by deleting its rdtgroup, the resulting change in
CLOSID/RMID must be immediately propagated to the PQR_ASSOC MSR on the
task(s)' CPUs.
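
For context, the consumer of these fields is the context-switch path,
which programs PQR_ASSOC from the incoming task's closid/rmid. The
snippet below is only a simplified paraphrase of that path for
illustration (the helper name resctrl_sched_in_sketch() is made up; the
real work is done in resctrl_sched_in()), assuming the usual PQR_ASSOC
layout with the RMID in the low half and the CLOSID in the high half:

  /*
   * Simplified paraphrase (not the exact kernel code) of the
   * context-switch consumer of task_struct::{closid,rmid}.
   */
  #include <linux/sched.h>        /* struct task_struct, READ_ONCE() */
  #include <asm/msr.h>            /* wrmsr(), MSR_IA32_PQR_ASSOC */

  static void resctrl_sched_in_sketch(struct task_struct *tsk)
  {
          u32 closid = READ_ONCE(tsk->closid);   /* class of service */
          u32 rmid   = READ_ONCE(tsk->rmid);     /* monitoring ID */

          /*
           * PQR_ASSOC tags all subsequent requests issued by this CPU;
           * the RMID sits in the low half, the CLOSID in the high half.
           */
          wrmsr(MSR_IA32_PQR_ASSOC, rmid, closid);
  }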

x86 allows reordering loads with prior stores, so if the task starts
running between a task_curr() check that the CPU hoisted before the
stores in the CLOSID/RMID update, then it can start running with the old
CLOSID/RMID until it is switched again because __rdtgroup_move_task()
failed to determine that it needs to be interrupted to obtain the new
CLOSID/RMID.

Refer to the diagram below:

   CPU 0                                   CPU 1
   -----                                   -----
   __rdtgroup_move_task():
     curr <- t1->cpu->rq->curr
                                           __schedule():
                                             rq->curr <- t1
                                           resctrl_sched_in():
                                             t1->{closid,rmid} -> {1,1}
     t1->{closid,rmid} <- {2,2}
     if (curr == t1) // false

A similar race impacts rdt_move_group_tasks(), which updates tasks in a
deleted rdtgroup.
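
The window can also be reproduced outside the kernel as a classic
store-buffering litmus test. The standalone program below is
illustration only (the file name, thread names, and variables are made
up for the demo): the "mover" thread plays the group-move path (store
the new ID, then load the current-task marker) and the "switcher"
thread plays the context switch (publish the new current task, then
load the ID). Built with -DNO_FENCE, x86 may let both loads observe the
stale values in the same round, which is the lost-update case described
above; with the seq_cst fences (the smp_mb() analogue) that outcome
cannot occur:

  /* sb_litmus.c - userspace illustration only, not kernel code.
   * Build: gcc -O2 -pthread sb_litmus.c            (fenced: 0 bad rounds)
   *        gcc -O2 -pthread -DNO_FENCE sb_litmus.c (may report bad rounds)
   */
  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>

  #define ROUNDS 200000

  static atomic_int closid, curr;     /* stand-ins for t->closid, rq->curr */
  static int r_mover, r_switcher;     /* what each side's load observed */
  static pthread_barrier_t start, done;

  static void fence(void)
  {
  #ifndef NO_FENCE
          atomic_thread_fence(memory_order_seq_cst);  /* smp_mb() analogue */
  #endif
  }

  static void *mover(void *arg)       /* plays the group-move path */
  {
          (void)arg;
          for (int i = 0; i < ROUNDS; i++) {
                  pthread_barrier_wait(&start);
                  atomic_store_explicit(&closid, 1, memory_order_relaxed);
                  fence();
                  r_mover = atomic_load_explicit(&curr, memory_order_relaxed);
                  pthread_barrier_wait(&done);
          }
          return NULL;
  }

  static void *switcher(void *arg)    /* plays the context switch */
  {
          (void)arg;
          for (int i = 0; i < ROUNDS; i++) {
                  pthread_barrier_wait(&start);
                  atomic_store_explicit(&curr, 1, memory_order_relaxed);
                  fence();
                  r_switcher = atomic_load_explicit(&closid, memory_order_relaxed);
                  pthread_barrier_wait(&done);
          }
          return NULL;
  }

  int main(void)
  {
          pthread_t a, b;
          long bad = 0;

          pthread_barrier_init(&start, NULL, 3);
          pthread_barrier_init(&done, NULL, 3);
          pthread_create(&a, NULL, mover, NULL);
          pthread_create(&b, NULL, switcher, NULL);

          for (int i = 0; i < ROUNDS; i++) {
                  atomic_store(&closid, 0);
                  atomic_store(&curr, 0);
                  pthread_barrier_wait(&start);
                  pthread_barrier_wait(&done);
                  /* neither side noticed the other: the lost-update case */
                  if (r_mover == 0 && r_switcher == 0)
                          bad++;
          }

          pthread_join(a, NULL);
          pthread_join(b, NULL);
          printf("bad rounds: %ld / %d\n", bad, ROUNDS);
          return 0;
  }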

In a memory bandwidth-metered compute host, malicious jobs could exploit
this race to remain in a previous CLOSID or RMID in order to dodge a
class-of-service downgrade imposed by an admin or to steal bandwidth.

In both cases, use smp_mb() to order the task_struct::{closid,rmid}
stores before the loads in task_curr(). In particular, in the
rdt_move_group_tasks() case, simply execute an smp_mb() on every
iteration with a matching task.

It is possible to use a single smp_mb() in rdt_move_group_tasks(), but
this would require two passes and a means of remembering which
task_structs were updated in the first loop. However, benchmarking
results below showed too little performance impact in the simple
approach to justify implementing the two-pass approach.

Times below were collected using `perf stat` to measure the time to
remove a group containing a 1600-task, parallel workload.

CPU: Intel(R) Xeon(R) Platinum P-8136 CPU @ 2.00GHz (112 threads)

# mkdir /sys/fs/resctrl/test
# echo $$ > /sys/fs/resctrl/test/tasks
# perf bench sched messaging -g 40 -l 100000

task-clock time ranges collected using:

# perf stat rmdir /sys/fs/resctrl/test

Baseline: 1.54 - 1.60 ms
smp_mb() every matching task: 1.57 - 1.67 ms

Fixes: ae28d1aae48a ("x86/resctrl: Use an IPI instead of task_work_add() to update PQR_ASSOC MSR")
Fixes: 0efc89be9471 ("x86/intel_rdt: Update task closid immediately on CPU in rmdir and unmount")
Signed-off-by: Peter Newman <>
Reviewed-by: Reinette Chatre <>
Patch history:

- Explain exploit case in changelog for stable
- Add Fixes: lines
- Just put an smp_mb() between CLOSID/RMID stores and task_curr() calls
- Add a diagram detailing the race to the changelog
- Reorder the patches so that justification for sending more IPIs can
reference the patch fixing __rdtgroup_move_task().
- Correct tense of wording used in changelog and comments
- Split the handling of multi-task and single-task operations into
separate patches, now that they're handled differently.
- Clarify justification in the commit message, including moving some of
it out of inline code comment.
- Following Reinette's suggestion: use task_call_func() for single
task, IPI broadcast for group movements.
- Rebased to v6.1-rc4


arch/x86/kernel/cpu/resctrl/rdtgroup.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index e5a48f05e787..5993da21d822 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -580,8 +580,10 @@ static int __rdtgroup_move_task(struct task_struct *tsk,
 	/*
 	 * Ensure the task's closid and rmid are written before determining if
 	 * the task is current that will decide if it will be interrupted.
+	 * This pairs with the full barrier between the rq->curr update and
+	 * resctrl_sched_in() during context switch.
 	 */
-	barrier();
+	smp_mb();
 
 	/*
 	 * By now, the task's closid and rmid are set. If the task is current
@@ -2401,6 +2403,14 @@ static void rdt_move_group_tasks(struct rdtgroup *from, struct rdtgroup *to,
 			WRITE_ONCE(t->closid, to->closid);
 			WRITE_ONCE(t->rmid, to->mon.rmid);
 
+			/*
+			 * Order the closid/rmid stores above before the loads
+			 * in task_curr(). This pairs with the full barrier
+			 * between the rq->curr update and resctrl_sched_in()
+			 * during context switch.
+			 */
+			smp_mb();
+
 			/*
 			 * If the task is on a CPU, set the CPU in the mask.
 			 * The detection is inaccurate as tasks might move or
base-commit: 830b3c68c1fb1e9176028d02ef86f3cf76aa2476
