Date:    Fri, 23 Dec 2005 14:48:22 -0600
From:    Olof Johansson <>
Subject: Re: [PATCH] - Fix memory ordering problem in wake_futex()
On Fri, Dec 23, 2005 at 10:38:16AM -0600, Jack Steiner wrote:
>
> Here is a fix for an ugly race condition that occurs in wake_futex()
> on IA64.
>
> On IA64, locks are released using a "st.rel" instruction. This ensures
> that preceding "stores" are visible before the lock is released but
> does NOT prevent a "store" that follows the "st.rel" from becoming
> visible before the "st.rel". The result is that the task that owns the
> futex_q continues prematurely.
>
> The failure I saw is that the task that owned the futex_q resumed
> prematurely and was context-switched off of the cpu. The task's
> switch_stack occupied the same space as the futex_q. The store to
> q->lock_ptr overwrote the ar.bspstore in the switch_stack. When the
> task resumed, it ran with a corrupted ar.bspstore. Things went
> downhill from there.
>
> Without the fix, the application fails roughly every 10 minutes. With
> the fix, it ran 16 hours without a failure.
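If I'm reading this right, the failing pattern reduces to roughly the
sketch below. This is reconstructed from your description, not the
verbatim futex.c code or your actual patch, and the smp_wmb() is only
my guess at the shape of the fix:

	/* Sketch only -- details like the fd/sigio handling omitted. */
	static void wake_futex(struct futex_q *q)
	{
		list_del_init(&q->list);

		/*
		 * The unlock inside wake_up_all() is a st.rel on ia64:
		 * it keeps the preceding stores from passing it...
		 */
		wake_up_all(&q->waiters);

		/*
		 * ...but it does not stop the store below from becoming
		 * visible first.  Hence an explicit write barrier before
		 * clearing lock_ptr; once lock_ptr is NULL the waiter
		 * may free the futex_q.
		 */
		smp_wmb();
		q->lock_ptr = NULL;
	}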
So what happened to what the comment 10 lines above your patch says?
	/*
	 * The lock in wake_up_all() is a crucial memory barrier after
	 * the list_del_init() and also before assigning to q->lock_ptr.
	 */
On PPC64, the spinlock unlock path has a sync in there for the very purpose of adding the write barrier. Maybe the ia64 unlock path is missing something similar?
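Something like the following, maybe. This is a purely hypothetical
sketch -- ia64_spin_unlock_full() is a made-up name, and I haven't
checked what the ia64 spinlock code actually does:

	/*
	 * Hypothetical: an ia64 unlock that is a full barrier, mirroring
	 * the effect of the sync in the ppc64 unlock path.  "mf" orders
	 * all earlier memory accesses before all later ones, so a store
	 * issued after the unlock can no longer pass the stores made
	 * inside the critical section.
	 */
	static inline void ia64_spin_unlock_full(volatile unsigned int *lock)
	{
		asm volatile ("mf" ::: "memory");	/* full memory fence */
		*lock = 0;				/* plain store, not st.rel */
	}

(The cost would be a full fence on every unlock, which is presumably
why ia64 uses st.rel in the first place.)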
-Olof