Subject: Re: Some questions about linux kernel.

> The idea I had a few weeks ago to solve the problem and so find out the
> hog (and that I'll experiment with in real-life 2.3.x soon) is to add a
> per-task page fault rate (a la avg_slice). Once we know the page fault
> rate and the time of the last fault for each process, we'll be almost
> able to find out the memory hog without possible mistakes, and we won't
> need anything else.

That sounds interesting. I would be interested to see actual statistics
of the per-task page faults.
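
Something like the following is what I imagine (just a userspace sketch
of the decay idea, NOT your actual patch; the field names and the
half-life constant are made up, loosely modelled on how avg_slice
decays):

#include <math.h>

struct task_stats {
        double flt_score;       /* decaying page-fault score */
        double last_flt;        /* time of the most recent fault */
};

#define HALFLIFE 8.0            /* seconds; arbitrary tuning constant */

/* Called on every page fault taken by the task. */
static void account_fault(struct task_stats *t, double now)
{
        double dt = now - t->last_flt;

        /* Decay the old score by its age, then count this fault. */
        t->flt_score = t->flt_score * exp(-dt * M_LN2 / HALFLIFE) + 1.0;
        t->last_flt = now;
}

/* Score as seen by the hog detector; it keeps decaying while idle. */
static double current_score(const struct task_stats *t, double now)
{
        double dt = now - t->last_flt;

        return t->flt_score * exp(-dt * M_LN2 / HALFLIFE);
}

A task that faults heavily right now ends up with a high score; a task
that faulted a lot long ago decays back toward zero.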

A few things which come to my mind:

Take some box with a long uptime: init on it has been running since it
was booted, so it is likely to accumulate a lot of page faults over time.
For example, I have here a box with 484 days of uptime. Let's say that
tomorrow "the evil program" starts; it might take a while before it
accumulates more page faults than init. ... but I guess we don't need to
worry about this scenario, since I think you are talking about the fault
RATE, not the total number of faults.
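
For what it's worth, with a decaying score like the sketch above the
484-days-of-init scenario takes care of itself. A toy calculation (all
the numbers are invented):

#include <stdio.h>
#include <math.h>

#define HALFLIFE 8.0    /* seconds, same made-up constant as above */

static double decayed(double score, double idle_secs)
{
        return score * exp(-idle_secs * M_LN2 / HALFLIFE);
}

int main(void)
{
        /* init: a huge historical score, but idle for two minutes. */
        printf("init: %g\n", decayed(1e6, 120.0));      /* ~30 */

        /* the evil program: 500 faults just now. */
        printf("evil: %g\n", decayed(500.0, 0.0));      /* 500 */
        return 0;
}

After only two minutes of init sitting idle, the fresh faulter already
dominates, no matter how many faults init accumulated over 484 days.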

What about kswapd or syslogd? They are going to be trying to do their
jobs when the system is heavily loaded, and so are likely to generate
lots of page faults too. I think they could become potential targets
just because they are trying to do their job.
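
If that turns out to be a problem, the selection code could simply
refuse to pick kernel threads and be reluctant about root-owned daemons.
A hypothetical sketch (none of this is real kernel code; in the kernel
you would recognize kernel threads like kswapd by p->mm == NULL):

struct candidate {
        int is_kthread;         /* kernel thread, e.g. kswapd (no user mm) */
        int is_root;            /* privileged daemon, e.g. syslogd */
        double flt_score;       /* decaying fault score from the sketch above */
};

static double badness(const struct candidate *c)
{
        if (c->is_kthread)
                return 0.0;     /* never a candidate */

        /* Be reluctant to shoot system daemons doing their job. */
        return c->is_root ? c->flt_score / 4.0 : c->flt_score;
}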


Just some thoughts. I'm NOT saying the idea is bad. I'm just CURIOUS what
the stats will be.


Adam

box:/home/adam# uptime
10:49pm up 484 days, 9:39, 16 users, load average: 1.01, 0.74, 0.34



