From: Raghavendra K T
Date: 2015-02-15
Subject: Re: [PATCH V5] x86 spinlock: Fix memory corruption on completing completions
On 02/15/2015 09:47 PM, Oleg Nesterov wrote:
> Well, I regret I mentioned the lack of barrier after enter_slowpath ;)
>
> On 02/15, Raghavendra K T wrote:
>>
>> @@ -46,7 +46,8 @@ static __always_inline bool static_key_false(struct static_key *key);
>>
>> static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
>> {
>> - set_bit(0, (volatile unsigned long *)&lock->tickets.tail);
>> + set_bit(0, (volatile unsigned long *)&lock->tickets.head);
>> + barrier();
>> }
>
> Because this barrier() looks really confusing.
>
> Firstly, it is equally unneeded on x86. At the same time, it cannot help.
> We need a memory barrier() between set_bit(SLOWPATH) and READ_ONCE(head)
> to avoid the race with spin_unlock().
>
> So I think you should replace it with smp_mb__after_atomic() or remove it.
>

I resent the patch with the above change.
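
For the archive: with Oleg's suggestion applied, the helper reads
roughly as below. This is a sketch, not the resend verbatim; on x86,
set_bit() is already a locked operation, so smp_mb__after_atomic()
compiles away to a compiler barrier there and mainly documents the
ordering the code relies on.

	static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
	{
		set_bit(0, (volatile unsigned long *)&lock->tickets.head);
		/*
		 * The SLOWPATH flag must be visible before we re-read
		 * head; otherwise we can miss the unlock while the
		 * unlocker misses the flag, and nobody sends the kick.
		 * See the unlock-side sketch further down.
		 */
		smp_mb__after_atomic();
	}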

>
> Other than that, I believe this version is correct. So I won't insist; this
> is cosmetic after all.

Thanks Oleg.
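
For readers of the archive: the "race with spin_unlock()" above is a
lost-wakeup pattern. The unlock side this pairs with looks roughly like
the following sketch (reconstructed from the patch series under
discussion, not copied from the resend; BUILD_BUG_ON and friends
elided). Because the flag now lives in head, the xadd() both releases
the lock and fetches the flag in one atomic operation:

	static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
	{
		if (TICKET_SLOWPATH_FLAG &&
		    static_key_false(&paravirt_ticketlocks_enabled)) {
			__ticket_t head;

			/*
			 * xadd() is a full barrier: release the lock and
			 * fetch the old head, SLOWPATH flag included,
			 * atomically.
			 */
			head = xadd(&lock->tickets.head, TICKET_LOCK_INC);

			if (unlikely(head & TICKET_SLOWPATH_FLAG)) {
				head &= ~TICKET_SLOWPATH_FLAG;
				__ticket_unlock_kick(lock,
						     head + TICKET_LOCK_INC);
			}
		} else
			__add(&lock->tickets.head, TICKET_LOCK_INC,
			      UNLOCK_LOCK_PREFIX);
	}

If the waiter's set_bit() were not ordered before its re-read of head,
the waiter could observe the pre-unlock head and halt, while this
xadd() sees no SLOWPATH flag and skips __ticket_unlock_kick(): the
lost wakeup the memory barrier prevents.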


