Subject: Re: [PATCH 00/11] preempt_count rework -v3
On Tue, Sep 17, 2013 at 12:53:44PM +0200, Ingo Molnar wrote:
>
> * Peter Zijlstra <peterz@infradead.org> wrote:
>
> > These patches optimize preempt_enable by firstly folding the preempt and
> > need_resched tests into one -- this should work for all architectures. And
> > secondly by providing per-arch preempt_count implementations; with x86 using
> > per-cpu preempt_count for fastest access.
> >
> > These patches have been boot tested on CONFIG_PREEMPT=y x86_64 and survive
> > building a x86_64-defconfig kernel.
> >
> >     text    data     bss filename
> > 11387014 1454776 1187840 defconfig-build/vmlinux.before
> > 11352294 1454776 1187840 defconfig-build/vmlinux.after
>
> That's a 0.3% size improvement (and most of the improvement is in
> hotpaths), despite GCC being somewhat stupid about not letting us
> mark asm goto targets as cold paths, which causes some unnecessary
> register shuffling in some cases, right?
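
(Aside, for anyone following along: the fold works roughly like the
sketch below. This is an illustrative toy, not the patch's exact code;
in the real thing __preempt_count lives in per-cpu storage and the names
here are only stand-ins.)

#define PREEMPT_NEED_RESCHED	0x80000000U	/* inverted: set == no resched pending */

/* Toy stand-in; the real counter is per-cpu. */
static unsigned int __preempt_count = PREEMPT_NEED_RESCHED;

static inline void set_need_resched(void)
{
	/* Clearing the (inverted) bit arms the decrement test below. */
	__preempt_count &= ~PREEMPT_NEED_RESCHED;
}

static inline int preempt_count_dec_and_test(void)
{
	/*
	 * Reads zero only when the preempt count hit 0 *and* the
	 * inverted need-resched bit was cleared: a single dec+test
	 * covers both conditions.
	 */
	return --__preempt_count == 0;
}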

I'm not entirely sure where the bloat in 1/11 comes from; several
functions look like they avoid stack variables in favour of using more
registers, which creates more push/pop on the entry/exit paths. For
others I'm not entirely sure what is going on.

But it does look like the unlikely() hint still works, even with the
asm goto; you'll note that the call to preempt_schedule() is
out-of-line.
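
To see why it stays out-of-line, here is roughly the shape of the asm
goto fast path. Again an illustrative userspace toy (needs gcc >= 4.5),
not the kernel's version, which does a %gs-relative decl on the per-cpu
counter:

extern void preempt_schedule(void);	/* slow-path stand-in */
extern unsigned int __preempt_count;

static inline void preempt_enable(void)
{
	/*
	 * Single decrement-and-branch; GCC places the do_resched
	 * label (and the call behind it) off the straight-line path,
	 * so the common case falls through without a taken branch.
	 */
	asm goto ("decl %0\n\t"
		  "jz %l[do_resched]"
		  : /* no outputs allowed with asm goto (pre-GCC 11) */
		  : "m" (__preempt_count)
		  : "memory"
		  : do_resched);
	return;
do_resched:
	preempt_schedule();
}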

