Subject: Re: [RFC][PATCH 2/3] math128: Introduce {mult,add,cmp}_u128
On Tue, Apr 24, 2012 at 2:54 PM, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
>
> that does generate slightly better code in that it avoids some masks on
> 64bit:
>
> @@ -7,12 +7,11 @@
>  .LFB38:
>        .cfi_startproc
>        movq    %rdi, %r8
> -       movq    %rdi, %rdx
>        movq    %rsi, %rcx
> +       mov     %edi, %edx
>        shrq    $32, %r8
> -       andl    $4294967295, %edx
>        shrq    $32, %rcx
> -       andl    $4294967295, %esi
> +       mov     %esi, %esi

Oh christ.

What insane version of gcc is that? Can you please make a gcc bug-report?

Because a compiler that generates an instruction sequence like

movq %rdi,%rsi
andl $4294967295, %esi

is just so fricking stupid that it's outright buggy. That's just
crazy. It's demented. It's an "and" with all bits set.

But yeah, I do think that in general using a cast to 32-bit instead of
a mask to 32-bit is easier for the compiler. Although that still is a
particularly stupid code sequence to use.
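To make the cast-vs-mask point concrete, here is a minimal sketch (not the
patch's actual mult_u128 code; the names below are made up for illustration)
of a 64x64->128 multiply built from 32-bit halves. The low halves are taken
with a cast to uint32_t rather than "& 0xffffffff", which is the form that
lets the compiler emit a plain 32-bit mov instead of the pointless andl:

#include <stdint.h>

/* illustrative only -- not the interface from the patch */
typedef struct { uint64_t lo, hi; } u128_sketch;

static u128_sketch mul_u64_u64_sketch(uint64_t a, uint64_t b)
{
	uint32_t a_lo = (uint32_t)a;	/* cast, not a & 0xffffffff */
	uint32_t a_hi = a >> 32;
	uint32_t b_lo = (uint32_t)b;
	uint32_t b_hi = b >> 32;

	uint64_t ll = (uint64_t)a_lo * b_lo;
	uint64_t lh = (uint64_t)a_lo * b_hi;
	uint64_t hl = (uint64_t)a_hi * b_lo;
	uint64_t hh = (uint64_t)a_hi * b_hi;

	/* fold the cross terms into the middle 64 bits */
	uint64_t mid = lh + (ll >> 32);	/* cannot overflow */
	uint64_t carry = 0;
	u128_sketch r;

	mid += hl;
	if (mid < hl)			/* middle sum wrapped */
		carry = 1ULL << 32;

	r.lo = (mid << 32) | (uint32_t)ll;
	r.hi = hh + (mid >> 32) + carry;
	return r;
}

Compiled for x86-64, the (uint32_t) casts come out as plain 32-bit moves,
which is the shape of the "slightly better" code quoted above.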

Linus