Date: Fri, 24 Jul 2015
From: Borislav Petkov
Subject: Re: Kernel broken on processors without performance counters
On Thu, Jul 23, 2015 at 09:02:14PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 23, 2015 at 07:54:36PM +0200, Borislav Petkov wrote:
> > On Thu, Jul 23, 2015 at 07:08:11PM +0200, Peter Zijlstra wrote:
> > > That would be bad, how can we force it to emit 5 bytes?
> >
> > .byte 0xe9 like we used to do in static_cpu_has_safe().
>
> Like so then?
>
> static __always_inline bool arch_static_branch_jump(struct static_key *key, bool inv)
> {
> 	unsigned long kval = (unsigned long)key + inv;
>
> 	asm_volatile_goto("1:"
> 		".byte 0xe9\n\t .long %l[l_yes]\n\t"
> 		".pushsection __jump_table, \"aw\" \n\t"
> 		_ASM_ALIGN "\n\t"
> 		_ASM_PTR "1b, %l[l_yes], %c0 \n\t"
> 		".popsection \n\t"
> 		: : "i" (kval) : : l_yes);
>
> 	return false;
> l_yes:
> 	return true;
> }

Yap.

But we can do even better: note down what kind of JMP the compiler
generated and teach __jump_label_transform() to generate the right one.
Maybe struct jump_entry would get a flags member or so. That way we're
optimal.

Methinks...
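
Roughly something like this, maybe (a rough, untested sketch only: the
extra flags word, JUMP_ENTRY_JMP_SHORT and the simplified
__jump_label_transform_sketch() below are made up for illustration, and
the real __jump_label_transform() has a different signature):

struct jump_entry {
	unsigned long code;	/* address of the jump site		*/
	unsigned long target;	/* jump destination (l_yes)		*/
	unsigned long key;	/* associated static_key (+ low bit)	*/
	unsigned long flags;	/* NEW (invented): what gcc emitted	*/
};

#define JUMP_ENTRY_JMP_SHORT	1UL	/* 2-byte EB rel8, else 5-byte E9 rel32 */

static void __jump_label_transform_sketch(struct jump_entry *entry, bool enable)
{
	unsigned char code[5];
	size_t len;

	if (entry->flags & JUMP_ENTRY_JMP_SHORT) {
		/* the compiler emitted a 2-byte JMP: patch EB rel8 */
		code[0] = 0xeb;
		code[1] = (unsigned char)(entry->target - (entry->code + 2));
		len = 2;
	} else {
		/* a 5-byte JMP, which is all we assume today: E9 rel32 */
		code[0] = 0xe9;
		*(s32 *)&code[1] = entry->target - (entry->code + 5);
		len = 5;
	}

	/*
	 * 'enable' would pick between the JMP built above and a NOP of
	 * the same length; the patching itself stays as it is now,
	 * e.g. text_poke_bp(entry->code, code, len, ...).
	 */
}

Alternatively, the flag could be squeezed into a spare low bit of one of
the existing members instead of growing the table.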

--
Regards/Gruss,
Boris.

ECO tip #101: Trim your mails when you reply.
--

