From: Valentin Schneider <>
Subject: Re: [RFC PATCH] sched/core: Fix premature p->migration_pending completion
Date: Fri, 05 Feb 2021 11:02:27 +0000
On 04/02/21 15:30, Qais Yousef wrote:
> On 02/03/21 18:59, Valentin Schneider wrote:
>> On 03/02/21 17:23, Qais Yousef wrote:
>> > On 01/27/21 19:30, Valentin Schneider wrote:
>> >> Initial conditions:
>> >>   victim.cpus_mask = {CPU0, CPU1}
>> >>
>> >>   CPU0                                CPU1                        CPU<don't care>
>> >>
>> >>   switch_to(victim)
>> >>                                                                   set_cpus_allowed(victim, {CPU1})
>> >>                                                                     kick CPU0 migration_cpu_stop({.dest_cpu = CPU1})
>> >>   switch_to(stopper/0)
>> >>                                       // e.g. CFS load balance
>> >>                                       move_queued_task(CPU0, victim, CPU1);
>> >>                                       switch_to(victim)
>> >>                                                                   set_cpus_allowed(victim, {CPU0});
>> >>                                                                     task_rq_unlock();
>> >>   migration_cpu_stop(dest_cpu=CPU1)
>> >
>> > This migration stop is due to set_cpus_allowed(victim, {CPU1}), right?
>> >
>>
>> Right
>>
>> >>     task_rq(p) != rq && pending
>> >>       kick CPU1 migration_cpu_stop({.dest_cpu = CPU1})
>> >>
>> >>                                       switch_to(stopper/1)
>> >>                                       migration_cpu_stop(dest_cpu=CPU1)
>> >
>> > And this migration stop is due to set_cpus_allowed(victim, {CPU0}), right?
>> >
>>
>> Nein! This is a retriggering of the "current" stopper (triggered by
>> set_cpus_allowed(victim, {CPU1})), see the tail of that
>>
>>         else if (dest_cpu < 0 || pending)
>>
>> branch in migration_cpu_stop(), is what I'm trying to hint at with that
>>
>>     task_rq(p) != rq && pending
>
> Okay I see. But AFAIU, the work will be queued in order. So we should first
> handle the set_cpus_allowed_ptr(victim, {CPU0}) before the retrigger, no?
>
> So I see migration_cpu_stop() running 3 times
>
>       1. because of set_cpus_allowed(victim, {CPU1}) on CPU0
>       2. because of set_cpus_allowed(victim, {CPU0}) on CPU1
>       3. because of retrigger of '1' on CPU0
>
On that 'CPU<don't care>' lane, I intentionally included task_rq_unlock() but not 'kick CPU1 migration_cpu_stop({.dest_cpu = CPU0})'. IOW, there is nothing in that trace that queues a stopper work for 2. - it *will* happen at some point, but harm will already have been done.
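To make the retrigger explicit, here is a condensed sketch of the relevant
branches of migration_cpu_stop() - heavily trimmed and paraphrased rather
than the exact source, so treat the field and helper names as approximate:

	static int migration_cpu_stop(void *data)
	{
		struct migration_arg *arg = data;
		struct set_affinity_pending *pending = arg->pending;
		struct task_struct *p = arg->task;
		int dest_cpu = arg->dest_cpu;
		struct rq *rq = this_rq();
		struct rq_flags rf;

		/* ... flush pending wakeups, take p->pi_lock and rq->lock ... */

		if (task_rq(p) == rq) {
			/*
			 * The task is (still or again) on this rq: clear the
			 * pending request and migrate if still queued.
			 */
			if (pending) {
				p->migration_pending = NULL;
				/* complete(&pending->done) after unlocking */
			}
			if (task_on_rq_queued(p))
				rq = __migrate_task(rq, &rf, p, dest_cpu);
		} else if (dest_cpu < 0 || pending) {
			/*
			 * task_rq(p) != rq && pending: the task moved away
			 * under us (e.g. load balance), so requeue this very
			 * stopper work on the task's new CPU. That is the
			 * retrigger of 1. above, still carrying
			 * .dest_cpu = CPU1.
			 */
			stop_one_cpu_nowait(task_cpu(p), migration_cpu_stop,
					    &pending->arg, &pending->stop_work);
		}

		/* ... unlock, then complete(&pending->done) if flagged ... */
		return 0;
	}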
The migrate_task_to() example is potentially worse, because it doesn't rely on which stopper work gets enqueued first - only that an extra affinity change happens before the first stopper work grabs the pi_lock and completes.
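For completeness, the waiter that such a premature complete() releases -
again a rough sketch following the shape of affine_move_task() (which backs
both set_cpus_allowed_ptr() and migrate_task_to()), with names and details
approximate rather than verbatim:

	/* In affine_move_task(), with task_rq_lock() held: */
	struct set_affinity_pending my_pending = { }, *pending;

	init_completion(&my_pending.done);
	p->migration_pending = &my_pending;
	pending = p->migration_pending;

	task_rq_unlock(rq, p, rf);

	/* Kick the stopper on the task's current CPU ... */
	stop_one_cpu(cpu_of(rq), migration_cpu_stop, &pending->arg);

	/*
	 * ... and wait for a stopper to signal that the task now runs
	 * within the new mask. If a stale stopper invocation does the
	 * complete() instead, we return from here while the task may still
	 * sit outside its cpus_mask - the premature completion this thread
	 * is about.
	 */
	wait_for_completion(&my_pending.done);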