From: Ingo Molnar <mingo@kernel.org>
Date: 2020-10-13
Subject: Re: [GIT PULL] RCU changes for v5.10

* Linus Torvalds <torvalds@linux-foundation.org> wrote:

> On Mon, Oct 12, 2020 at 7:14 AM Ingo Molnar <mingo@kernel.org> wrote:
> >
> > Please pull the latest core/rcu git tree from:
> >
> > RCU changes for v5.10:
> >
> > - Debugging for smp_call_function()
> > - RT raw/non-raw lock ordering fixes
> > - Strict grace periods for KASAN
> > - New smp_call_function() torture test
> > - Torture-test updates
> > - Documentation updates
> > - Miscellaneous fixes
>
> I am *very* unhappy with this pull request.
>
> It doesn't even mention the big removal of CONFIG_PREEMPT_COUNT, that I felt
> was still under discussion.

Not mentioning the unconditional PREEMPT_COUNT enabling was 100% my fault: I
summarized the changes insufficiently, as I (mistakenly) thought them to be
uncontroversial. My apologies for that!

Here's a second attempt to properly justify these changes:

Regarding the performance aspect of the change, I was relying on these
performance measurements:

"Freshly conducted benchmarks did not reveal any measurable impact from
enabling preempt count unconditionally. On kernels with
CONFIG_PREEMPT_NONE or CONFIG_PREEMPT_VOLUNTARY the preempt count is only
incremented and decremented but the result of the decrement is not
tested. Contrary to that enabling CONFIG_PREEMPT which tests the result
has a small but measurable impact due to the conditional branch/call."
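
To make that concrete, here is a rough sketch of what the
preempt_disable()/preempt_enable() fast paths boil down to in the various
models once the count is always maintained - this is paraphrased from
include/linux/preempt.h with helpers and tracing details trimmed, not the
literal kernel macros:

/* All preemption models: entering the section is a single increment. */
#define preempt_disable() \
do { \
        preempt_count_inc(); \
        barrier(); \
} while (0)

#ifdef CONFIG_PREEMPT
/*
 * Full preemption: the decrement has to be tested so that a reschedule
 * can be triggered when the count drops to zero - this is the conditional
 * branch/call with the small but measurable cost mentioned above.
 */
#define preempt_enable() \
do { \
        barrier(); \
        if (unlikely(preempt_count_dec_and_test())) \
                __preempt_schedule(); \
} while (0)
#else
/*
 * PREEMPT_NONE / PREEMPT_VOLUNTARY with an unconditional preempt count:
 * the count is decremented, but the result is never tested, so no extra
 * branch is generated.
 */
#define preempt_enable() \
do { \
        barrier(); \
        preempt_count_dec(); \
} while (0)
#endif

So in the NONE/VOLUNTARY models the change only adds the inc/dec pairs,
which is what the text-size deltas below are measuring.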

FWIW, to inject some hard numbers into this discussion, here's also the
code generation impact of an unconditional PREEMPT_COUNT, on x86-defconfig:

    text    data     bss filename
19675937 5591036 1433672 vmlinux.ubuntu.vanilla         # 856deb866d16: ("Linux 5.9-rc5")
19682382 5590964 1425480 vmlinux.ubuntu.PREEMPT_COUNT=y # 7681205ba49d: ("preempt: Make preempt count unconditional")

So this is a pretty small increase, +6k (+0.03%), in generated code in the
core kernel, and it doesn't add widespread new control dependencies either.

I also measured the core kernel code generation impact on the kernel config
from a major Linux distribution that uses PREEMPT_VOLUNTARY=y (Ubuntu):

kepler:~/tip> grep PREEMPT .config
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
CONFIG_PREEMPT_COUNT=y
CONFIG_PREEMPT_NOTIFIERS=y

    text     data     bss filename
15754341 13790786 5242880 vmlinux.ubuntu.vanilla         # 856deb866d16: ("Linux 5.9-rc5")
15754790 13791018 5242880 vmlinux.ubuntu.PREEMPT_COUNT=y # 7681205ba49d: ("preempt: Make preempt count unconditional")
15754771 13791018 5242880 vmlinux.ubuntu.full_cleanups   # 849b9c5446cc: ("kvfree_rcu(): Fix ifnullfree.cocci warnings")

In this test the changes result in very little increase in generated code in
the core kernel: just +449 bytes, or +0.003%.

In fact the impact was so low on this config that I initially disbelieved it,
double-checked the result, and re-ran the build with all =m's turned into =y's
to get a whole-kernel measurement of the generated code impact:

    text     data      bss filename
84594448 61819613 42000384 vmlinux.ubuntu.vanilla         # 856deb866d16: ("Linux 5.9-rc5")
84594129 61819777 42000384 vmlinux.ubuntu.PREEMPT_COUNT=y # 7681205ba49d: ("preempt: Make preempt count unconditional")

Note how the full ~84 MB image actually *shrunk*, possibly due to random
function & section alignment noise.

So to get a truly sensitive measurement of the impact of the PREEMPT_COUNT
change I built with CONFIG_CC_OPTIMIZE_FOR_SIZE=y, to get tight instruction
packing and no alignment padding artifacts:

    text     data      bss filename
69460329 60932573 40411136 vmlinux.ubuntu.vanilla         # 856deb866d16: ("Linux 5.9-rc5")
69460739 60936853 40411136 vmlinux.ubuntu.PREEMPT_COUNT=y # 7681205ba49d: ("preempt: Make preempt count unconditional")

This shows a 410-byte (+0.0005%) increase.

( Side note: it's rather impressive that -Os saves 21% of text size - if
only GCC wasn't so stupid with the final 2-3% size optimizations... )

So there's even less relative impact on the whole 84 MB kernel image -
modules don't do much direct preempt_count manipulation.
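
To illustrate that point, here's a hypothetical driver-style snippet (made-up
names, not from any real module) - the count manipulation is hidden inside
the locking primitives rather than open-coded in the module:

#include <linux/spinlock.h>
#include <linux/mutex.h>

static DEFINE_SPINLOCK(demo_lock);
static DEFINE_MUTEX(demo_mutex);
static unsigned long demo_events;

static void demo_record_event(void)
{
        /*
         * The preempt count inc/dec is done by the spinlock primitives;
         * how much of that ends up inlined into module text depends on
         * the spinlock inlining config (CONFIG_INLINE_SPIN_*).
         */
        spin_lock(&demo_lock);
        demo_events++;
        spin_unlock(&demo_lock);
}

static void demo_configure(void)
{
        /*
         * Sleeping locks leave this code path preemptible; any preempt
         * count use is internal to the core kernel's locking code.
         */
        mutex_lock(&demo_mutex);
        /* ... slow configuration work ... */
        mutex_unlock(&demo_mutex);
}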

Just for completeness' sake I re-ran the original defconfig build as well,
this time with -Os:

    text    data     bss filename
16091696 5565988 2928696 vmlinux.defconfig.Os.vanilla         # 856deb866d16: ("Linux 5.9-rc5")
16095525 5570156 2928696 vmlinux.defconfig.Os.PREEMPT_COUNT=y # 7681205ba49d: ("preempt: Make preempt count unconditional")

That's +3.8k, or +0.025% - similar to the initial +0.03% result.

So even though I'm normally fiercely anti-bloat, if we combine the
performance and code generation measurements with these maintainability
arguments:

"It's about time to make essential functionality of the kernel consistent
across the various preemption models.

Enable CONFIG_PREEMPT_COUNT unconditionally. Follow up changes will
remove the #ifdeffery and remove the config option at the end."

I think the PREEMPT_COUNT=y change to reduce the schism between the various
preemption models is justified - and reducing the code base distance to -rt
is the icing on the cake.
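
For reference, the #ifdeffery those follow-up changes remove is roughly of
this shape - a paraphrase of the old CONFIG_PREEMPT_COUNT=n fallback in
include/linux/preempt.h, not the exact code:

#ifdef CONFIG_PREEMPT_COUNT
/*
 * Real accounting: the preempt_disable()/preempt_enable() variants from
 * the sketch earlier in this mail live here.
 */
#else
/*
 * No accounting: the section markers degenerate to pure compiler
 * barriers, so preempt_count()-based checks cannot see such sections
 * at all.
 */
#define preempt_disable()                       barrier()
#define preempt_enable()                        barrier()
#define preempt_enable_no_resched()             barrier()
#endif

With PREEMPT_COUNT always on, the #else branch (and eventually the Kconfig
option itself) goes away, and the accounting behaves the same way in every
preemption model.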

Thanks,

Ingo
