Subject: Re: [PATCH v2] ubsan: Avoid unnecessary 128-bit shifts
On 03/04/2019 07.45, George Spelvin wrote:
>
> diff --git a/lib/ubsan.c b/lib/ubsan.c
> index e4162f59a81c..a7eb55fbeede 100644
> --- a/lib/ubsan.c
> +++ b/lib/ubsan.c
> @@ -89,8 +89,8 @@ static bool is_inline_int(struct type_descriptor *type)
>  static s_max get_signed_val(struct type_descriptor *type, unsigned long val)
>  {
>  	if (is_inline_int(type)) {
> -		unsigned extra_bits = sizeof(s_max)*8 - type_bit_width(type);
> -		return ((s_max)val) << extra_bits >> extra_bits;
> +		unsigned extra_bits = sizeof(val)*8 - type_bit_width(type);
> +		return (signed long)val << extra_bits >> extra_bits;
>  	}

Maybe add some "#if BITS_PER_LONG == 64 / #define sign_extend_long
sign_extend64" (else sign_extend32) plumbing to linux/bitops.h and write
this as sign_extend_long(val, type_bit_width(type) - 1)? Or define it
locally in lib/ubsan.c, so that "git grep" will show that it's available
when the next potential user comes along. A rough sketch of the bitops.h
variant follows.
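Untested, and the name sign_extend_long is made up here, but it could
reuse the sign_extend32()/sign_extend64() helpers that already exist in
include/linux/bitops.h:

	#include <linux/bitops.h>

	/*
	 * Sign-extend the low @index+1 bits of @value to long, using
	 * whichever fixed-width helper matches sizeof(long).
	 */
	static inline long sign_extend_long(unsigned long value, int index)
	{
	#if BITS_PER_LONG == 64
		return sign_extend64(value, index);
	#else
		return sign_extend32(value, index);
	#endif
	}

and then the return statement above would just become

	return sign_extend_long(val, type_bit_width(type) - 1);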

Btw., lib/ubsan.c is probably compiled without instrumentation, but it
would be a nice touch to avoid UB in the implementation anyway: do the
left shift in the unsigned type, and only cast to signed for the right
shift.
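
That is how the existing sign_extend32()/sign_extend64() helpers are
already written; the open-coded equivalent here would be something like
(again untested):

	unsigned extra_bits = sizeof(val)*8 - type_bit_width(type);

	return (long)(val << extra_bits) >> extra_bits;

(The right shift of a negative value is implementation-defined rather
than undefined, and the kernel relies on gcc's arithmetic-shift
behaviour anyway.)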

Rasmus
