Date:    Mon, 11 Nov 2019 09:43:13 +0100
From:    Peter Zijlstra <>
Subject: Re: [RFC PATCH 4/4] irq_work: Weaken ordering in irq_work_run_list()
On Fri, Nov 08, 2019 at 05:08:58PM +0100, Frederic Weisbecker wrote:
> diff --git a/kernel/irq_work.c b/kernel/irq_work.c
> index 49c53f80a13a..b709ab05cbfd 100644
> --- a/kernel/irq_work.c
> +++ b/kernel/irq_work.c
> @@ -34,8 +34,8 @@ static bool irq_work_claim(struct irq_work *work)
>  	oflags = atomic_fetch_or(IRQ_WORK_CLAIMED, &work->flags);
>  	/*
>  	 * If the work is already pending, no need to raise the IPI.
> -	 * The pairing atomic_fetch_andnot() in irq_work_run() makes sure
> -	 * everything we did before is visible.
> +	 * The pairing atomic_andnot() followed by a barrier in irq_work_run()
> +	 * makes sure everything we did before is visible.
>  	 */
>  	if (oflags & IRQ_WORK_PENDING)
>  		return false;
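
So after this hunk the claim side reads roughly like so (reconstructing
from the rest of the series; the function tail is an assumption on my
part):

	static bool irq_work_claim(struct irq_work *work)
	{
		int oflags;

		/*
		 * Fully ordered RMW: everything written to the work before
		 * the claim is visible to whoever observes the flags update.
		 */
		oflags = atomic_fetch_or(IRQ_WORK_CLAIMED, &work->flags);
		/*
		 * If the work is already pending, no need to raise the IPI.
		 * The pairing atomic_andnot() followed by a barrier in
		 * irq_work_run() makes sure everything we did before is
		 * visible.
		 */
		if (oflags & IRQ_WORK_PENDING)
			return false;

		return true;
	}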
> @@ -151,14 +151,16 @@ static void irq_work_run_list(struct llist_head *list)
>  	 * to claim that work don't rely on us to handle their data
>  	 * while we are in the middle of the func.
>  	 */
> -	flags = atomic_fetch_andnot(IRQ_WORK_PENDING, &work->flags);
> +	atomic_andnot(IRQ_WORK_PENDING, &work->flags);
> +	smp_mb__after_atomic();
I think I'd prefer you use:
flags = atomic_fetch_andnot_acquire(IRQ_WORK_PENDING, &work->flags);
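
Concretely, the loop body would then read something like this (an
untested sketch on my end; the acquire keeps work->func() from being
reordered before the PENDING clearing, and the fetched value feeds the
cmpxchg):

	flags = atomic_fetch_andnot_acquire(IRQ_WORK_PENDING, &work->flags);

	work->func(work);

	/* Drop BUSY, unless someone re-claimed the work meanwhile. */
	(void)atomic_cmpxchg(&work->flags, flags & ~IRQ_WORK_PENDING,
			     flags & ~IRQ_WORK_CLAIMED);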
Also, I'm cursing at myself for the horrible comments here.
>  	work->func(work);
>  	/*
>  	 * Clear the BUSY bit and return to the free state if
>  	 * no-one else claimed it meanwhile.
>  	 */
> -	(void)atomic_cmpxchg(&work->flags, flags, flags & ~IRQ_WORK_BUSY);
> +	(void)atomic_cmpxchg(&work->flags, flags & ~IRQ_WORK_PENDING,
> +			     flags & ~IRQ_WORK_CLAIMED);
>  	}
>  }
> 
> --
> 2.23.0
> 
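
As a standalone illustration of the flags lifecycle, here is a
hypothetical userspace model using C11 atomics (not kernel code; the
flag values just mirror the series' macros):

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	#define IRQ_WORK_PENDING	(1 << 0)
	#define IRQ_WORK_BUSY		(1 << 1)
	#define IRQ_WORK_CLAIMED	(IRQ_WORK_PENDING | IRQ_WORK_BUSY)

	static atomic_int flags;

	/* Claim side: returns true when an IPI would have to be raised. */
	static bool claim(void)
	{
		int oflags = atomic_fetch_or(&flags, IRQ_WORK_CLAIMED);

		return !(oflags & IRQ_WORK_PENDING);
	}

	/* Run side: clear PENDING, run the callback, then drop BUSY. */
	static void run(void)
	{
		int f = atomic_fetch_and_explicit(&flags, ~IRQ_WORK_PENDING,
						  memory_order_acquire);
		int expected = f & ~IRQ_WORK_PENDING;

		/* work->func(work) would run here */

		/* Back to free, unless a concurrent claim() re-set PENDING. */
		atomic_compare_exchange_strong(&flags, &expected,
					       f & ~IRQ_WORK_CLAIMED);
	}

	int main(void)
	{
		printf("first claim needs IPI: %d\n", claim());		/* 1 */
		printf("second claim needs IPI: %d\n", claim());	/* 0 */
		run();
		printf("flags after run: %d\n", atomic_load(&flags));	/* 0 */
		return 0;
	}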