From:	Thomas Gleixner <>
Subject:	Re: [patch v12 09/13] task isolation: add preempt notifier to sync per-CPU vmstat dirty info to thread info
Date:	Wed, 27 Apr 2022 14:09:16 +0200
On Wed, Apr 27 2022 at 09:11, Thomas Gleixner wrote:
> On Tue, Mar 15 2022 at 12:31, Marcelo Tosatti wrote:
>> If a thread has task isolation activated, is preempted by thread B,
>> which marks vmstat information dirty, and is preempted back in,
>> one might return to userspace with vmstat dirty information on the
>> CPU in question.
>>
>> To address this problem, add a preempt notifier that transfers vmstat dirty
>> information to TIF_TASK_ISOL thread flag.
>
> How does this compile with CONFIG_KVM=n?
Aside from that, the existence of this preempt notifier alone tells me that this is either a design fail or has no design in the first place.
The state of vmstat does not matter at all at the point where a task is scheduled in. It matters when an isolated task goes out to user space or enters a VM.
We already have something similar in the exit to user path:
tick_nohz_user_enter_prepare()
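For reference, that helper is essentially just a static-key-guarded CPU
check plus the deferred rcuog wakeup flush (from include/linux/tick.h at
the time of this series; quoted from memory, so double-check the tree):

static inline void tick_nohz_user_enter_prepare(void)
{
	if (tick_nohz_full_cpu(smp_processor_id()))
		rcu_nocb_flush_deferred_wakeup();
}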
So you can do something like the below and have:
static inline void task_isol_exit_to_user_prepare(void)
{
	if (unlikely(current_needs_isol_exit_to_user()))
		__task_isol_exit_to_user_prepare();
}
where current_needs_isol_exit_to_user() is a simple check of either an existing mechanism like
task->syscall_work & SYSCALL_WORK_TASK_ISOL_EXIT
or of some new task isolation specific member of task_struct which is placed so it is cache hot at that point:
task->isol_work & SYSCALL_TASK_ISOL_EXIT
which is going to be almost zero overhead for any non-isolated task.
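A minimal sketch of that check, using the names from above (neither
task->isol_work nor SYSCALL_TASK_ISOL_EXIT exists in the tree today;
both are part of this proposal):

static inline bool current_needs_isol_exit_to_user(void)
{
	/* isol_work and SYSCALL_TASK_ISOL_EXIT as proposed above */
	return current->isol_work & SYSCALL_TASK_ISOL_EXIT;
}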
It's trivial enough to encode the real stuff into task->isol_work and I'm pretty sure that a 32-bit member is sufficient for that. There is absolutely no need for a potential 64x64 bit feature matrix.
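To illustrate the encoding, a sketch with made-up bit names;
quiet_vmstat() is the existing helper which folds the per-CPU vmstat
diffs back into the global counters:

/* One bit per quiesce action, all fitting into a 32-bit isol_work.
 * The bit name is made up for illustration.
 */
#define TASK_ISOL_QUIESCE_VMSTAT	0x00000001U

static void __task_isol_exit_to_user_prepare(void)
{
	unsigned int work = current->isol_work;

	if (work & TASK_ISOL_QUIESCE_VMSTAT)
		quiet_vmstat();
	/* further feature bits would be handled here */
}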
Thanks,
tglx
---
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -142,6 +142,12 @@ void noinstr exit_to_user_mode(void)
 /* Workaround to allow gradual conversion of architecture code */
 void __weak arch_do_signal_or_restart(struct pt_regs *regs) { }
 
+static void exit_to_user_update_work(void)
+{
+	tick_nohz_user_enter_prepare();
+	task_isol_exit_to_user_prepare();
+}
+
 static unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
 					    unsigned long ti_work)
 {
@@ -178,8 +184,7 @@ static unsigned long exit_to_user_mode_l
 		 */
		local_irq_disable_exit_to_user();
 
-		/* Check if any of the above work has queued a deferred wakeup */
-		tick_nohz_user_enter_prepare();
+		exit_to_user_update_work();
 
 		ti_work = read_thread_flags();
 	}
@@ -194,8 +199,7 @@ static void exit_to_user_mode_prepare(st
 
 	lockdep_assert_irqs_disabled();
 
-	/* Flush pending rcuog wakeup before the last need_resched() check */
-	tick_nohz_user_enter_prepare();
+	exit_to_user_update_work();
 
 	if (unlikely(ti_work & EXIT_TO_USER_MODE_WORK))
 		ti_work = exit_to_user_mode_loop(regs, ti_work);
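For completeness, the obvious stub pattern keeps this compiling with the
feature disabled, which also answers the CONFIG_KVM=n style of concern
(assuming the series' Kconfig symbol is CONFIG_TASK_ISOLATION; sketch
only, not part of the diff above):

#ifdef CONFIG_TASK_ISOLATION
static inline void task_isol_exit_to_user_prepare(void)
{
	if (unlikely(current_needs_isol_exit_to_user()))
		__task_isol_exit_to_user_prepare();
}
#else
static inline void task_isol_exit_to_user_prepare(void) { }
#endif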