Subject: Re: [PATCH V4] x86 spinlock: Fix memory corruption on completing completions
On 02/13, Raghavendra K T wrote:
>
> @@ -164,7 +161,7 @@ static inline int arch_spin_is_locked(arch_spinlock_t *lock)
> {
> struct __raw_tickets tmp = READ_ONCE(lock->tickets);
>
> - return tmp.tail != tmp.head;
> + return tmp.tail != (tmp.head & ~TICKET_SLOWPATH_FLAG);
> }

Well, this can probably use __tickets_equal() too. But this is cosmetic.
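
For reference, a minimal sketch of that cosmetic cleanup, assuming
__tickets_equal() compares two ticket values with TICKET_SLOWPATH_FLAG
masked out, as it does in the patch:

	static inline int arch_spin_is_locked(arch_spinlock_t *lock)
	{
		struct __raw_tickets tmp = READ_ONCE(lock->tickets);

		/* __tickets_equal() ignores TICKET_SLOWPATH_FLAG in .head */
		return !__tickets_equal(tmp.head, tmp.tail);
	}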

It seems that arch_spin_is_contended() should be fixed along with this change, since

(__ticket_t)(tmp.tail - tmp.head) > TICKET_LOCK_INC

can be true because of TICKET_SLOWPATH_FLAG in .head, even if the lock is
actually unlocked. And the "(__ticket_t)" typecast looks unnecessary; it only
adds more confusion, but this is cosmetic too.
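
A rough sketch of such a fix, clearing the flag from .head before the
comparison (whether to also drop the cast is the separate cosmetic point
above):

	static inline int arch_spin_is_contended(arch_spinlock_t *lock)
	{
		struct __raw_tickets tmp = READ_ONCE(lock->tickets);

		/* ignore the slowpath flag so an unlocked lock never looks contended */
		tmp.head &= ~TICKET_SLOWPATH_FLAG;
		return (__ticket_t)(tmp.tail - tmp.head) > TICKET_LOCK_INC;
	}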



> @@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
> * check again make sure it didn't become free while
> * we weren't looking.
> */
> - if (ACCESS_ONCE(lock->tickets.head) == want) {
> + head = READ_ONCE(lock->tickets.head);
> + if (__tickets_equal(head, want)) {
> add_stats(TAKEN_SLOW_PICKUP, 1);
> goto out;

This is off-topic, but with or without this change perhaps it makes sense
to add smp_mb__after_atomic(). It is a nop on x86, just to make this code
more understandable for those (like me ;) who can never remember even the
x86 rules.
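
Concretely, something like the sketch below; the surrounding
kvm_lock_spinning() lines are only recalled from arch/x86/kernel/kvm.c and
abbreviated here, the point is just to put the (x86-nop) barrier between the
atomic update of the lock and the re-read of its head:

	/*
	 * __ticket_enter_slowpath() sets TICKET_SLOWPATH_FLAG via an atomic
	 * set_bit(); the barrier is a nop on x86 but documents that the flag
	 * must be visible before we re-check the ticket head below.
	 */
	__ticket_enter_slowpath(lock);
	smp_mb__after_atomic();

	/*
	 * check again make sure it didn't become free while
	 * we weren't looking.
	 */
	head = READ_ONCE(lock->tickets.head);
	if (__tickets_equal(head, want)) {
		add_stats(TAKEN_SLOW_PICKUP, 1);
		goto out;
	}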

Oleg.


