Subject: Re: [PATCH 2/2] The new jhash implementation
On Thursday, 25 November 2010 at 21:55 +0800, Changli Gao wrote:

> > I suggest :
> >
> > #include <linux/unaligned/packed_struct.h>
> > ...
> > a += __get_unaligned_cpu32(k);
> > b += __get_unaligned_cpu32(k+4);
> > c += __get_unaligned_cpu32(k+8);
> >
> > Fits nicely in registers.
> >
>
> I think you mean get_unaligned_le32().
>

No, I meant __get_unaligned_cpu32()

We do the same thing in jhash2():

a += k[0];
b += k[1];
c += k[2];

We don't care about the byte order of the 32-bit quantity we are adding to
a, b or c, as long as it's consistent for the current machine ;)

get_unaligned_le32() would be slow on big-endian arches.
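
For reference, a minimal sketch of how the suggested reads could sit in the
jhash() inner loop; the while/len/__jhash_mix() structure below is assumed
from the existing include/linux/jhash.h, not taken from this patch:

#include <linux/unaligned/packed_struct.h>

	/* Assumed inner loop: read three unaligned 32-bit words per round
	 * in native (cpu) byte order, then mix them exactly as jhash2()
	 * mixes its aligned k[0..2].
	 */
	while (len > 12) {
		a += __get_unaligned_cpu32(k);
		b += __get_unaligned_cpu32(k + 4);
		c += __get_unaligned_cpu32(k + 8);
		__jhash_mix(a, b, c);
		len -= 12;
		k += 12;
	}

On arches with efficient unaligned access this compiles to plain 32-bit
loads; elsewhere the packed-struct helper does the byte assembly, with no
byte swap in either case.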



