Subject: Re: [PATCH 01/16] math128: Introduce various 128bit primitives
On Thu, Oct 25, 2012 at 6:47 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Wed, 2012-10-24 at 16:18 -0700, Linus Torvalds wrote:
>>
>> So please, explain what the pressing need is that is so worthwhile
>> that this is worth it. Maybe it was in a 00/16 cover letter, but not
>> only was that not sent out to the people who got 01, you'd still want
>> it in the commit message.
>
> There are two use cases:
>
> 1) the proposed SCHED_DEADLINE needs to do some u64 x u64 math; it
> ends up having to multiply a deadline (in usec) by a runtime (also
> in usec).
>
> 2) the infrastructure adds mul_u64_u32_shr(), which is something we
> do a lot of in all the time manipulation: applying a multiplier to
> some u64 clock value.
>
> We can do better on some archs than we can in generic code, so this
> interface could give a win there.

So I have no objection to the mul_u64_u32_shr() model, exactly because:

- it doesn't actually use u128 anywhere (except perhaps internally,
but that is totally about the implementation, not visible anywhere
else).

- it is fundamentally optimizable, especially on 32-bit architectures,
where it doesn't need to do a full 64x64 multiply (see the sketch below).
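
To illustrate, a generic fallback could look something like the sketch
below (hand-written here for illustration, using kernel-style u64/u32
types; not necessarily what the series implements, and assuming
shift <= 32):

static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift)
{
	u32 ah = a >> 32, al = a;	/* split a into 32-bit halves */
	u64 ret;

	/* two 32x32->64 multiplies instead of one 64x64->128 */
	ret = ((u64)al * mul) >> shift;
	if (ah)
		ret += ((u64)ah * mul) << (32 - shift);
	return ret;
}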

It's the *rest* of the "u128" math I really object to. I also wonder
about the u64 x u64 math case for SCHED_DEADLINE, because I assume that
it doesn't actually end up using the 128-bit result in that form, but
scales it down again somehow?

In other words, the thing I really object to is exactly the whole
"generic 128-bit math". That's the part that can easily get very
expensive in 32-bit environments. Even for the "u64 x u64" multiply for
SCHED_DEADLINE, how could those possibly be true 64-bit values (even if
your "usec" was wrong and it's actually "nsec")?

At what point does the scheduler talk/think about billions of seconds
in nanoseconds? Seriously?
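
(For scale: even a full u64 worth of nanoseconds is only 2^64 / 10^9,
about 1.8*10^10 seconds, or roughly 584 years.)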

That's a perfect example of where "true 128-bit math" is potentially
stupidly expensive on 32-bit platforms, when a 48x48->96 bit multiply
might be cheaper. And if we're talking about some fixed-point
arithmetic, where the result actually gets shifted down again (like
mul_u64_u32_shr() does) so that it is guaranteed to fit in (say) 64
bits, then that would be cheaper yet.
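
Concretely, that kind of multiply-and-shift can be built out of four
32x32->64 partial products, so a 32-bit target never needs a generic
u128 type. A rough sketch (again just an illustration, not the patch's
code; it assumes 0 < shift < 64 and a result that fits in 64 bits):

static inline u64 mul_u64_u64_shr(u64 a, u64 b, unsigned int shift)
{
	u32 ah = a >> 32, al = a;
	u32 bh = b >> 32, bl = b;
	u64 lo = (u64)al * bl;		/* bits   0..63 of a*b (partial) */
	u64 m1 = (u64)al * bh;		/* bits  32..95 */
	u64 m2 = (u64)ah * bl;		/* bits  32..95 */
	u64 hi = (u64)ah * bh;		/* bits 64..127 */
	u64 carry;

	/* fold the two middle partial products into a 128-bit hi:lo */
	carry = (lo >> 32) + (u32)m1 + (u32)m2;
	lo = (carry << 32) | (u32)lo;
	hi += (m1 >> 32) + (m2 >> 32) + (carry >> 32);

	/* shift the 128-bit product down into a single u64 */
	return (hi << (64 - shift)) | (lo >> shift);
}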

I realize that some people seem to think that being "generic" is
superior, and that maybe somebody wants to do 128-bit arithmetic for
other things. I think that is exactly the wrong way to look at it,
because it just encourages people to do the wrong thing: "look, 128-bit
arithmetic is easily available, so I can do fancy things", and then it
just happens to go really fast on x86-64 and suck everywhere else.

Linus

