Date: Wed, 12 Aug 2015 20:29:34 -0700
Subject: get_vmalloc_info() and /proc/meminfo insanely expensive
From: Linus Torvalds <>
I just did some profiling of a simple "make test" in the git repo, and was surprised by the top kernel offender: get_vmalloc_info() showed up at roughly 4% cpu use.
It turns out that bash ends up reading /proc/meminfo on every single activation, and "make test" is basically just running a huge collection of shell scripts. You can verify by just doing
strace -o trace sh -c "echo"
to see what bash does on your system. I suspect it's actually glibc, because a quick google finds the function "get_phys_pages()" that just looks at the "MemTotal" line (or possibly "get_avphys_pages()", which looks at the "MemFree" line).
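For reference, a minimal user-space sketch of what that glibc path presumably does: open /proc/meminfo, scan for the "MemTotal:" line, and convert the kB value to pages. The parsing details here are an assumption for illustration, not glibc's actual code:

    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical sketch of a get_phys_pages()-style helper:
     * find "MemTotal:" in /proc/meminfo and return total memory
     * in pages. Not glibc's actual implementation. */
    static long phys_pages_sketch(void)
    {
            FILE *f = fopen("/proc/meminfo", "r");
            char line[128];
            long kb = -1;

            if (!f)
                    return -1;
            while (fgets(line, sizeof(line), f)) {
                    if (sscanf(line, "MemTotal: %ld kB", &kb) == 1)
                            break;
            }
            fclose(f);
            if (kb < 0)
                    return -1;
            /* convert kB to pages */
            return kb / (sysconf(_SC_PAGESIZE) / 1024);
    }

The point is that any such helper has to read and parse the whole /proc/meminfo file, which on the kernel side means generating every line of it, including the expensive vmalloc statistics.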
Ok, so bash is insane for caring so deeply that it does this regardless of anything else. But what else is new? User space doing odd things is practically a truism.
My gut feel for this is that we should just rate-limit this and cache the vmalloc information for a fraction of a second or something. Maybe we could expose total memory sizes in some more efficient format, but it's not like existing binaries will magically de-crapify themselves, so just speeding up meminfo sounds like a good thing.
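As a sketch of that rate-limiting idea (the names and the HZ/10 window here are illustrative, not an actual patch), the /proc/meminfo path could go through a small wrapper that only refreshes the expensive part every so often:

    #include <linux/jiffies.h>
    #include <linux/spinlock.h>
    #include <linux/vmalloc.h>

    /* Hypothetical rate-limited wrapper around get_vmalloc_info():
     * refresh the cached numbers at most ~10 times per second and
     * hand out the stale copy in between. */
    static DEFINE_SPINLOCK(vmalloc_info_lock);
    static struct vmalloc_info cached_vmi;
    static unsigned long vmi_stamp;   /* jiffies of last refresh */
    static bool vmi_valid;

    static void get_vmalloc_info_cached(struct vmalloc_info *vmi)
    {
            spin_lock(&vmalloc_info_lock);
            if (!vmi_valid || time_after(jiffies, vmi_stamp + HZ / 10)) {
                    get_vmalloc_info(&cached_vmi);
                    vmi_stamp = jiffies;
                    vmi_valid = true;
            }
            *vmi = cached_vmi;
            spin_unlock(&vmalloc_info_lock);
    }

A seqlock or a READ_ONCE scheme could avoid taking the lock on the common cached path, but a plain spinlock keeps the sketch simple.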
Maybe we could even cache the whole seq_file buffer - Al? How painful would something like that be? Although from the profiles, it's really just the vmalloc info gathering that shows up as actually wasting CPU cycles..
Linus