lkml.org 
[lkml]   [2015]   [Feb]   [27]   [last100]   RSS Feed
Views: [wrap][no wrap]   [headers]  [forward] 
 
Messages in this thread
/
From
Date
SubjectRe: [RFC PATCH] arm64: cmpxchg.h: Bring ldxr and stxr closer
On Fri, Feb 27, 2015 at 3:15 PM, Will Deacon <will.deacon@arm.com> wrote:
> On Fri, Feb 27, 2015 at 08:09:17PM +0000, Pranith Kumar wrote:
>> ARM64 documentation recommends keeping exclusive loads and stores as close as
>> possible. Any instructions which do not depend on the value loaded should be
>> moved outside.
>>
>> In the current implementation of cmpxchg(), there is a mov instruction which can
>> be pulled before the load exclusive instruction without any change in
>> functionality. This patch does that change.
>>
>> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
>> ---
>> arch/arm64/include/asm/cmpxchg.h | 10 +++++-----
>> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> [...]
>
>> @@ -166,11 +166,11 @@ static inline int __cmpxchg_double(volatile void *ptr1, volatile void *ptr2,
>> VM_BUG_ON((unsigned long *)ptr2 - (unsigned long *)ptr1 != 1);
>> do {
>> asm volatile("// __cmpxchg_double8\n"
>> + " mov %w0, #0\n"
>> " ldxp %0, %1, %2\n"
>
> Seriously, you might want to test this before you mindlessly make changes to
> low-level synchronisation code. Not only is the change completely unnecessary
> but it is actively harmful.
>

Oops, I apologize for this. I should have looked more closely. It is
wrong to do this in cmpxchg_double(). What about the other cases?


--
Pranith


\
 
 \ /
  Last update: 2015-02-27 21:41    [W:0.069 / U:0.056 seconds]
©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site