Date: Tue, 21 Apr 2015
From: Ingo Molnar <mingo@kernel.org>
Subject: [RFC PATCH] x86/asm/irq: Don't use POPF but STI

* Andy Lutomirski <luto@kernel.org> wrote:

> > Another different approach would be to formally state that
> > pv_irq_ops.save_fl() needs to return all the flags, which would
> > make local_irq_save() safe to use in this circumstance, but that
> > makes a hotpath longer for the sake of a single boot time check.
>
> ...which reminds me:
>
> Why does native_restore_fl restore anything other than IF? A branch
> and sti should be considerably faster than popf.

Yes, this has come up in the past, something like the patch below?

Totally untested and not signed off yet, because we'd first have to
make sure (via irq flags debugging) that it's not used in reverse,
i.e. to re-disable interrupts:

local_irq_save(flags);
local_irq_enable();
...
local_irq_restore(flags); /* effectively a local_irq_disable() */

I don't think we have many (any?) such patterns left, but that has to
be checked first. If we do have such cases then we'll have to use
different primitives there; see the sketch below.
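
A minimal sketch of such a conversion, assuming the STI-only restore
below goes in (the call site shown is hypothetical):

	local_irq_save(flags);
	local_irq_enable();
	...
	local_irq_disable();      /* explicit, instead of relying on restore */
	local_irq_restore(flags); /* now a no-op when IF was clear in flags */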

But this patch should be good enough to give a good overview of the
effects, if we decide to do this: the text impact does not look too
horrible, and the bloat of +1K on the x86 defconfig is better than I
first feared.

Thanks,

Ingo

======================>
From 6f01f6381e8293c360b7a89f516b8605e357d563 Mon Sep 17 00:00:00 2001
From: Ingo Molnar <mingo@kernel.org>
Date: Tue, 21 Apr 2015 13:32:13 +0200
Subject: [PATCH] x86/asm/irq: Don't use POPF but STI

So because the POPF instruction is slow and STI is faster on
essentially all x86 CPUs that matter, instead of:

ffffffff81891848: 9d popfq

we can do:

ffffffff81661a2e: 41 f7 c4 00 02 00 00 test $0x200,%r12d
ffffffff81661a35: 74 01 je ffffffff81661a38 <snd_pcm_stream_unlock_irqrestore+0x28>
ffffffff81661a37: fb sti
ffffffff81661a38:
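
For context, the native helpers involved look roughly like this in
arch/x86/include/asm/irqflags.h (quoted approximately, modulo exact
annotations and constraints):

	static inline void native_restore_fl(unsigned long flags)
	{
		asm volatile("push %0 ; popf"
			     : /* no output */
			     : "g" (flags)
			     : "memory", "cc");
	}

	static inline void native_irq_enable(void)
	{
		asm volatile("sti": : :"memory");
	}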

This bloats the kernel a bit, by about 1K on the 64-bit defconfig:

text data bss dec hex filename
12258634 1812120 1085440 15156194 e743e2 vmlinux.before
12259582 1812120 1085440 15157142 e74796 vmlinux.after

The other cost is the extra branching, which adds pressure to the
branch prediction hardware and introduces potential branch misses.
---
arch/x86/include/asm/irqflags.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index b77f5edb03b0..8bc2a9bc7a06 100644
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -69,7 +69,8 @@ static inline notrace unsigned long arch_local_save_flags(void)

static inline notrace void arch_local_irq_restore(unsigned long flags)
{
- native_restore_fl(flags);
+ if (likely(flags & X86_EFLAGS_IF))
+ native_irq_enable();
}

static inline notrace void arch_local_irq_disable(void)
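
As for the irq flags debugging step mentioned above, a hypothetical
transition-time check (ignoring header ordering issues, and keeping
the old POPF semantics while it is in place) could flag reverse users
before the semantics change:

	static inline notrace void arch_local_irq_restore(unsigned long flags)
	{
		/*
		 * Catch callers that use restore to re-disable interrupts:
		 * the saved flags say IF was off, yet IRQs are currently on.
		 */
		WARN_ON_ONCE(!(flags & X86_EFLAGS_IF) &&
			     (arch_local_save_flags() & X86_EFLAGS_IF));

		native_restore_fl(flags);
	}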

