Subject: How to measure enable_kernel_fpu overhead?
I'm working on some crypto primitives, and have MMX and SSE2 accelerated
versions. I plan on writing AltiVec (PPC) and NEON (ARM) ones, too.

But the performance varies, so I think I need to do some run-time
benchmarking, like the RAID6 code.
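
For reference, what I have in mind is something like the sketch below,
modelled loosely on the boot-time loop in lib/raid6/algos.c. The
struct xform_impl type, pick_fastest(), and the 4-jiffy window are made
up for illustration; only jiffies, preempt_disable()/preempt_enable(),
cpu_relax() and time_before() are real kernel interfaces here:

#include <linux/jiffies.h>
#include <linux/preempt.h>
#include <linux/types.h>
#include <asm/processor.h>	/* cpu_relax() */

/* Placeholder: one candidate implementation of the primitive. */
struct xform_impl {
	const char *name;
	void (*xform)(void *dst, const void *src, size_t len);
};

/* Count how many calls each candidate manages in a fixed number of
 * jiffies and keep the fastest, roughly as the RAID6 code does. */
static const struct xform_impl *pick_fastest(const struct xform_impl *const *impls,
					     void *buf, size_t len)
{
	const struct xform_impl *best = NULL;
	unsigned long best_count = 0;

	for (; *impls; impls++) {
		unsigned long j0, count = 0;

		preempt_disable();
		j0 = jiffies;
		while (jiffies == j0)		/* align to a tick edge */
			cpu_relax();
		j0 = jiffies;
		while (time_before(jiffies, j0 + 4)) {
			(*impls)->xform(buf, buf, len);
			count++;
		}
		preempt_enable();

		if (count > best_count) {
			best_count = count;
			best = *impls;
		}
	}
	return best;
}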

But what's even more annoying is that, unlike the RAID6 code, I can't
assume I'll always be working on large blocks. So it's not so much about
choosing one version as choosing a size threshold at which to switch over.

Which leads us to the overhead of enable_kernel_fpu(),
enable_kernel_altivec(), and whatever the ARM equivalent is.
(Um, I did a little searching... does it even exist?)

To complicate it a little more, there are at least two timing cases, depending
on the value of current_thread_info()->status & TS_USEDFPU. (For PowerPC,
it's current->thread.regs->msr & MSR_VEC.)
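
In x86 terms (where the pair is actually kernel_fpu_begin() /
kernel_fpu_end() rather than a literal enable_kernel_fpu()), the
measurement itself would be something like the sketch below; the part I
can't fill in is forcing TS_USEDFPU into a known state beforehand:

#include <asm/i387.h>	/* kernel_fpu_begin()/kernel_fpu_end() (x86) */
#include <asm/timex.h>	/* get_cycles() */

/* Time one begin/end pair.  Which of the two cases this measures
 * depends on whether TS_USEDFPU was set on entry, i.e. on whether
 * there is dirty user FPU state to save -- which is exactly the
 * state I don't know how to arrange. */
static cycles_t time_fpu_enable(void)
{
	cycles_t t0, t1;

	t0 = get_cycles();
	kernel_fpu_begin();
	kernel_fpu_end();
	t1 = get_cycles();

	return t1 - t0;
}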

There may be additional timing variation depending on how clever XSAVE
is with e.g. the high half of the ymm registers.


So I think the thing to do is benchmark a few different sizes in the two
timing cases and fit a linear function to the results. (More simply,
subtract the SIMD timings from the integer-code timings and find the
X-intercept of that difference.)
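
To make the arithmetic concrete, with made-up numbers: if the integer code
costs roughly b cycles/byte and the SIMD code costs a cycles/byte plus a
fixed enable/disable overhead c, the difference is c - (b - a)*n, whose
X-intercept is c/(b - a); e.g. c = 1000 cycles, b = 4, a = 1 gives a
switch-over around 333 bytes. As a sketch of doing that fit from two
measured sizes:

/* Fit a line through the (size, SIMD-minus-integer) differences at two
 * sizes n1 < n2 and return its X-intercept, i.e. the size above which
 * the SIMD version wins.  Illustration only; real code would average
 * several runs per point. */
static long crossover_bytes(long n1, long long int1, long long simd1,
			    long n2, long long int2, long long simd2)
{
	long long d1 = simd1 - int1;
	long long d2 = simd2 - int2;

	if (d2 >= d1)		/* SIMD never catches up */
		return -1;
	return n1 + (long)(d1 * (n2 - n1) / (d1 - d2));
}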

But I'm not sure how to create a suitably dirty user FPU state.
Especially as it might be early in boot and there might not be any user
processes yet to save it to.

Does anyone have any suggestions?

Thank you!

