Date:    Sat, 21 Nov 2015 19:11:29 +0100
From:    Oleg Nesterov <>
Subject: [PATCH v2 0/1] stop_machine: Remove stop_cpus_lock and lg_double_lock/unlock()
On 11/15, Oleg Nesterov wrote:
>
> I am also going to rediff/resend my old patch which removes lglock
> from stop_machine.c, but it probably needs more discussion so I'll
> send it separately.
Please see V2. It is much simpler, and it doesn't need cond_resched().
To me this looks better than changing stop_cpus() to take all stopper locks at once, because that approach would effectively turn stopper->lock into another lglock.
Yes, with this patch cpu_stop_queue_two_works() spins in a busy-wait loop if it races with stop_cpus(). But lg_double_lock() spins too, and performance-wise I think this change is a win.
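For comparison, the lg_double_lock() this patch removes looks roughly like the following in kernel/locking/lglock.c (quoted from memory, details may differ); it too spins, on the two per-CPU arch spinlocks:

void lg_double_lock(struct lglock *lg, int cpu1, int cpu2)
{
	BUG_ON(cpu1 == cpu2);

	/* lock in cpu order, just like lg_global_lock */
	if (cpu2 < cpu1)
		swap(cpu1, cpu2);

	preempt_disable();
	lock_acquire_shared(&lg->lock_dep_map, 0, 0, NULL, _RET_IP_);
	arch_spin_lock(per_cpu_ptr(lg->lock, cpu1));
	arch_spin_lock(per_cpu_ptr(lg->lock, cpu2));
}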
To simplify the review, let me show the code with this patch applied. The patch simply adds a "bool stop_cpus_in_progress" flag, set by queue_stop_cpus_work() and checked by cpu_stop_queue_two_works().
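The flag itself is presumably just a static file-scope boolean in kernel/stop_machine.c, next to the rest of the cpu_stopper state:

/* set/cleared by queue_stop_cpus_work() under preempt_disable(),
 * observed by cpu_stop_queue_two_works() under both stopper locks */
static bool stop_cpus_in_progress;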
static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
				    int cpu2, struct cpu_stop_work *work2)
{
	struct cpu_stopper *stopper1 = per_cpu_ptr(&cpu_stopper, cpu1);
	struct cpu_stopper *stopper2 = per_cpu_ptr(&cpu_stopper, cpu2);
	int err;
retry:
	spin_lock_irq(&stopper1->lock);
	spin_lock_nested(&stopper2->lock, SINGLE_DEPTH_NESTING);

	err = -ENOENT;
	if (!stopper1->enabled || !stopper2->enabled)
		goto unlock;
	/*
	 * Ensure that if we race with __stop_cpus() the stoppers won't get
	 * queued up in reverse order leading to system deadlock.
	 *
	 * We can't miss stop_cpus_in_progress if queue_stop_cpus_work() has
	 * queued a work on cpu1 but not on cpu2, we hold both locks.
	 *
	 * It can be falsely true but it is safe to spin until it is cleared,
	 * queue_stop_cpus_work() does everything under preempt_disable().
	 */
	err = -EDEADLK;
	if (unlikely(stop_cpus_in_progress))
		goto unlock;

	err = 0;
	__cpu_stop_queue_work(stopper1, work1);
	__cpu_stop_queue_work(stopper2, work2);
unlock:
	spin_unlock(&stopper2->lock);
	spin_unlock_irq(&stopper1->lock);

	if (unlikely(err == -EDEADLK)) {
		while (stop_cpus_in_progress)
			cpu_relax();
		goto retry;
	}
	return err;
}
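For context, the stop_two_cpus() caller is not changed by this patch; from memory it looks roughly like the sketch below. What matters here is only that both works go through cpu_stop_queue_two_works() and that -ENOENT is passed back if either stopper is disabled; the cpu1/cpu2 ordering shown is my assumption, needed to keep the nested stopper->lock order consistent between concurrent callers.

int stop_two_cpus(unsigned int cpu1, unsigned int cpu2, cpu_stop_fn_t fn, void *arg)
{
	struct cpu_stop_done done;
	struct cpu_stop_work work1, work2;
	struct multi_stop_data msdata = {
		.fn = fn,
		.data = arg,
		.num_threads = 2,
		.active_cpus = cpumask_of(cpu1),
	};

	work1 = work2 = (struct cpu_stop_work){
		.fn = multi_cpu_stop,
		.arg = &msdata,
		.done = &done
	};

	cpu_stop_init_done(&done, 2);
	set_state(&msdata, MULTI_STOP_PREPARE);

	/* assumed: queue in a fixed (cpu) order so both locks nest the same way */
	if (cpu1 > cpu2)
		swap(cpu1, cpu2);
	if (cpu_stop_queue_two_works(cpu1, cpu2, &work1, &work2))
		return -ENOENT;

	wait_for_completion(&done.completion);
	return done.ret;
}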
static bool queue_stop_cpus_work(const struct cpumask *cpumask,
				 cpu_stop_fn_t fn, void *arg,
				 struct cpu_stop_done *done)
{
	struct cpu_stop_work *work;
	unsigned int cpu;
	bool queued = false;

	/*
	 * Disable preemption while queueing to avoid getting
	 * preempted by a stopper which might wait for other stoppers
	 * to enter @fn which can lead to deadlock.
	 */
	preempt_disable();
	stop_cpus_in_progress = true;
	for_each_cpu(cpu, cpumask) {
		work = &per_cpu(cpu_stopper.stop_work, cpu);
		work->fn = fn;
		work->arg = arg;
		work->done = done;
		if (cpu_stop_queue_work(cpu, work))
			queued = true;
	}
	stop_cpus_in_progress = false;
	preempt_enable();

	return queued;
}
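And for completeness, the stop_cpus() path above this is also unchanged; roughly (again from memory, details may differ):

static int __stop_cpus(const struct cpumask *cpumask, cpu_stop_fn_t fn, void *arg)
{
	struct cpu_stop_done done;

	cpu_stop_init_done(&done, cpumask_weight(cpumask));
	if (!queue_stop_cpus_work(cpumask, fn, arg, &done))
		return -ENOENT;
	wait_for_completion(&done.completion);
	return done.ret;
}

int stop_cpus(const struct cpumask *cpumask, cpu_stop_fn_t fn, void *arg)
{
	int ret;

	/* static works are used, process one request at a time */
	mutex_lock(&stop_cpus_mutex);
	ret = __stop_cpus(cpumask, fn, arg);
	mutex_unlock(&stop_cpus_mutex);
	return ret;
}

Since stop_cpus_mutex already serializes the stop_cpus() callers, a plain bool (rather than a counter) should be enough for stop_cpus_in_progress.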
Oleg.