Subject: Re: [BENCHMARK] nproc: netlink access to /proc information
On Sun, 29 Aug 2004 09:05:42 -0700, William Lee Irwin III wrote:
>> Okay, these explain some of the difference. I usually see issues with
>> around 10000 processes with fully populated virtual address spaces and
>> several hundred vmas each, varying between 200 to 1000, mostly
>> concentrated at somewhere just above 300.

On Sun, Aug 29, 2004 at 07:02:48PM +0200, Roger Luethi wrote:
> I agree, that should make quite a difference. As you said, we are
> working on orthogonal areas: My current focus is on data delivery (sane
> semantics and minimal overhead), while you seem to be more interested
> in better data gathering.

Yes, there doesn't seem to be any conflict between the code we're
working on. These benchmark results are very useful for quantifying the
relative importance of the overheads under more typical conditions.


On Sun, Aug 29, 2004 at 07:02:48PM +0200, Roger Luethi wrote:
> I profiled "top -d 0 -b > /dev/null" for about 100 and 10^5 processes.
> When monitoring 100 (real-world) processes, /proc specific overhead
> (_IO_vfscanf_internal, number, __d_lookup, vsnprintf, etc.) amounts to
> about one third of total resource usage.
> ==> 100 processes: top -d 0 -b > /dev/null <==
> CPU: CPU with timer interrupt, speed 0 MHz (estimated)
> Profiling through timer interrupt
> samples  %        image name     symbol name
> 20439    12.2035  libc-2.3.3.so  _IO_vfscanf_internal
> 15852     9.4647  vmlinux        number
> 11635     6.9469  vmlinux        task_statm
>  9286     5.5444  libc-2.3.3.so  _IO_vfprintf_internal
>  9128     5.4500  vmlinux        proc_pid_stat

Lexical analysis is CPU-intensive, probably due to the cache misses
taken while traversing the strings. This is likely inherent in
string-processing interfaces.
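
For context, a /proc-based monitor funnels every field through that
machinery once per task. A minimal sketch of the pattern, assuming the
seven-field /proc/<pid>/statm layout (read_statm() and the struct name
are illustrative, not anybody's actual code):

/* Sketch: per-task statm read as a /proc-based monitor does it.  The
 * fscanf() call is where the _IO_vfscanf_internal samples above land. */
#include <stdio.h>
#include <sys/types.h>

struct statm_sample {
	unsigned long size, resident, shared, text, lib, data, dirty;
};

static int read_statm(pid_t pid, struct statm_sample *s)
{
	char path[64];
	FILE *f;
	int n;

	snprintf(path, sizeof(path), "/proc/%d/statm", (int)pid);
	f = fopen(path, "r");
	if (!f)
		return -1;
	n = fscanf(f, "%lu %lu %lu %lu %lu %lu %lu",
		   &s->size, &s->resident, &s->shared,
		   &s->text, &s->lib, &s->data, &s->dirty);
	fclose(f);
	return n == 7 ? 0 : -1;
}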


On Sun, Aug 29, 2004 at 07:02:48PM +0200, Roger Luethi wrote:
> With 10^5 additional dummy processes, resource usage is dominated by
> attempts to get a current list of pids. My own benchmark walked a list
> of known pids, so that was not an issue. I bet though that nproc can
> provide more efficient means to get such a list than getdents (we could
> even allow a user to ask for a message on process creation/kill).
> So basically that's just another place where nproc-based tools would
> trounce /proc-based ones (that piece is vaporware today, though).
> ==> 10000 processes: top -d 0 -b > /dev/null <==
> CPU: CPU with timer interrupt, speed 0 MHz (estimated)
> Profiling through timer interrupt
> samples  %        image name     symbol name
> 35855    36.0707  vmlinux        get_tgid_list
>  9366     9.4223  vmlinux        pid_alive
>  7077     7.1196  libc-2.3.3.so  _IO_vfscanf_internal
>  5386     5.4184  vmlinux        number
>  3664     3.6860  vmlinux        proc_pid_stat

get_tgid_list() is a sad story I don't have time to go into in depth.
The short version is that larger systems are extremely sensitive to
write hold time on the tasklist_lock, and this shows up at scales that
don't need SGI's participation to tell us about (though still at scales
beyond personal financial resources).
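
For reference, the userspace side of that pattern: every refresh
re-walks the /proc directory, and those passes are what get_tgid_list()
ends up servicing. A sketch of the readdir()-based enumeration
(scan_pids() is an illustrative name):

/* Sketch: pid enumeration by scanning /proc, roughly what top does on
 * every refresh.  With enough dummy tasks this directory walk becomes
 * the hot path seen in the profile above.  Error handling trimmed. */
#include <ctype.h>
#include <dirent.h>
#include <stdlib.h>
#include <sys/types.h>

static size_t scan_pids(pid_t *pids, size_t max)
{
	DIR *d = opendir("/proc");
	struct dirent *de;
	size_t n = 0;

	if (!d)
		return 0;
	while (n < max && (de = readdir(d)) != NULL) {
		if (isdigit((unsigned char)de->d_name[0]))
			pids[n++] = (pid_t)atoi(de->d_name);
	}
	closedir(d);
	return n;
}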


On Sun, Aug 29, 2004 at 07:02:48PM +0200, Roger Luethi wrote:
> The remaining profiles are for two benchmarks from my previous message.
> Field computation is more prominent than with top because the benchmark
> uses a known list of pids and parsing is kept at a trivial level.
> ==> /proc/pid/statm (2x) for 10000 processes <==
> CPU: CPU with timer interrupt, speed 0 MHz (estimated)
> Profiling through timer interrupt
> samples  %        image name     symbol name
> 7430      9.9485  libc-2.3.3.so  _IO_vfscanf_internal
> 6195      8.2948  vmlinux        __d_lookup
> 5477      7.3335  vmlinux        task_statm
> 5082      6.8046  vmlinux        number
> 3227      4.3208  vmlinux        link_path_walk

scanf() is still very pronounced here; I wonder how well-optimized
glibc's implementation is, or whether it would be worth circumventing
it with a more specialized parser if its generality requirements
preclude faster execution.
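
For a fixed format like statm the specialized parser is not much code.
A sketch of the kind of bypass meant here (parse_statm() is an
illustrative name, not a proposal for any particular tree):

/* Sketch: fixed-format statm parsing without the scanf machinery:
 * one read(), then a strtoul() walk.  No format-string interpretation,
 * no stdio locking. */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

static int parse_statm(const char *path, unsigned long vals[7])
{
	char buf[256];
	char *p, *end;
	ssize_t len;
	int fd, i;

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;
	len = read(fd, buf, sizeof(buf) - 1);
	close(fd);
	if (len <= 0)
		return -1;
	buf[len] = '\0';

	for (i = 0, p = buf; i < 7; i++) {
		vals[i] = strtoul(p, &end, 10);
		if (end == p)
			return -1;
		p = end;
	}
	return 0;
}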


On Sun, Aug 29, 2004 at 07:02:48PM +0200, Roger Luethi wrote:
> nproc removes most of the delivery overhead so field computation is
> now dominant. Strictly speaking, it should be even higher because the
> benchmark requests the same fields three times, but they only get
> computed once in such a case.
> ==> 27 nproc fields for 10000 processes, one process per request <==
> CPU: CPU with timer interrupt, speed 0 MHz (estimated)
> Profiling through timer interrupt
> samples  %        image name     symbol name
> 7647     25.0894  vmlinux        __task_mem
> 2125      6.9720  vmlinux        find_pid
> 1884      6.1813  vmlinux        nproc_pid_fields
> 1488      4.8820  vmlinux        __task_mem_cheap
> 1161      3.8092  vmlinux        mmgrab

From this, it looks like I'm going after the right culprit(s) in the
lower-level algorithms.


-- wli
