Subject: Re: Kernel Internals
In message <01BC0F9C.3C650720@jmohr.blitz.de>, James Mohr writes:
>
> >In article <01BC07E5.06DD4180@jmohr.blitz.de> James Mohr <jimmo@blitz.net> writes:
[Questions about pid to process mapping]

Yes, Linux does not do this in "a particularly clever way".
As you say, lookups involve traversing, on average, half of the task
array, e.g. from kill_proc():

for_each_task(p) {
        if (p && p->pid == pid)
                return send_sig(sig,p,priv);
}

Similarly, the get_pid() routine takes the next pid (modulo MAX_PID) as a
candidate and then checks every task to see whether that pid is already in
use; if it is, it repeats with the next candidate.
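
To make that concrete, here is a simplified user-space model of the same
linear-scan allocation (illustrative only; struct task, NR_TASKS and MAX_PID
are stand-ins, not the kernel's real definitions):

/*
 * Simplified user-space model of the linear-scan pid allocation
 * described above.  struct task, NR_TASKS and MAX_PID are stand-ins,
 * not the kernel's real definitions.
 */
#include <stdio.h>

#define NR_TASKS 512
#define MAX_PID  32768

struct task { int in_use; int pid; };

static struct task tasks[NR_TASKS];
static int last_pid = 0;

static int pid_in_use(int pid)
{
        /* the linear scan that makes lookup/allocation O(n) */
        for (int i = 0; i < NR_TASKS; i++)
                if (tasks[i].in_use && tasks[i].pid == pid)
                        return 1;
        return 0;
}

static int get_pid(void)
{
        do {
                last_pid = (last_pid + 1) % MAX_PID;
                if (last_pid == 0)      /* pid 0 is reserved */
                        last_pid = 1;
        } while (pid_in_use(last_pid));
        return last_pid;
}

int main(void)
{
        tasks[0].in_use = 1;            /* pretend pid 1 (init) exists */
        tasks[0].pid = 1;
        printf("allocated pid %d\n", get_pid());
        return 0;
}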

This is not a major problem for the majority of Linux installations, given
the number of processes that are active at any given time. At the other
end of the scale, DYNIX/ptx (Sequent's SMP Unix) uses a hash table for
fast PID lookup, a bitmap for fast allocation, etc. This is significantly
more complicated and only worthwhile because there are systems with
10,000-20,000 active processes!
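
For contrast, a hashed lookup along those lines might look roughly like the
following. This is a hypothetical user-space sketch of the general technique,
not DYNIX/ptx or Linux source; pidhash, hash_task() and find_task_by_pid()
are names used here purely for illustration:

/*
 * Hypothetical sketch of a hashed pid lookup: O(1) on average instead
 * of walking half the task list.  The collision chain pointer lives in
 * the task itself.
 */
#include <stddef.h>
#include <stdio.h>

#define PIDHASH_SZ 64                           /* power of two */
#define pid_hash(pid) ((pid) & (PIDHASH_SZ - 1))

struct task {
        int pid;
        struct task *hash_next;                 /* collision chain */
};

static struct task *pidhash[PIDHASH_SZ];

static void hash_task(struct task *p)
{
        int i = pid_hash(p->pid);
        p->hash_next = pidhash[i];
        pidhash[i] = p;
}

static struct task *find_task_by_pid(int pid)
{
        for (struct task *p = pidhash[pid_hash(pid)]; p; p = p->hash_next)
                if (p->pid == pid)
                        return p;
        return NULL;
}

int main(void)
{
        static struct task a = { 42, NULL }, b = { 106, NULL };

        hash_task(&a);
        hash_task(&b);          /* 106 hashes to the same bucket as 42 */
        printf("found pid %d\n", find_task_by_pid(106)->pid);
        return 0;
}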

[Query about bdflush vs. update and why is it in two pieces.]

I'll let Stephen Tweedie answer this one. Quoting from his mail of
Jan 15 last year,
"
bdflush has always been a twin-forked system. There was a kernel
fork, which just sat around waiting for the kernel to run low on free
buffers, setting off a disk sync when that happened; and a user fork,
which did the old job of the update(8) daemon, setting off a full disk
update every so often.

As of 1.3.5x, the kernel fork is fully integrated into the normal
kernel. The mechanism by which it used to be invoked is now a no-op,
so when /sbin/bdflush tries to start it, it will simply return and exit
cleanly. The other fork created by /sbin/bdflush is *still required*.
So, DON'T remove it.

The new 1.3.5x series is fully bdflush-compatible with older kernels.
You don't need to change your configuration to run it.

Cheers,
Stephen.
"

[Q. about reaping of dead children...]

> Here again, that wasn't the question. I know when the process table entry is
> cleared, I know what is kept in the process table entry after the process
> dies, and I know what happens when there is no parent waiting on the child.
> So, to put the question as clearly as I can:
>
> Is it true that the parent process is responsible for clearing the process
> table of child processes, or is this done by init or some other process,
> kernel function, whatever?
>

The exit status of the process has to be kept somewhere, so it is kept in
the process table entry (the task) until the process is waited for. Until
this happens, the process is a zombie.

The process remains in the zombie state until its parent waits for it.
If the parent hangs around and doesn't call wait, the zombie remains.
You will sometimes see many zombies on a system due to a badly written
application.

If the parent dies without waiting for the child, the child is inherited
by init, and init is written to regularly perform wait() (or a variant
thereof) to clean up the child. It is the system call that does the work,
so it is done *by* the kernel, on behalf of a user process.
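
You can watch this happen from user space with nothing more than standard
POSIX calls; the following small demonstration (mine, not from the kernel
sources) leaves the child as a zombie until waitpid() is called:

/*
 * Demonstration of the zombie state described above: the child's exit
 * status sits in the process table until the parent waits for it.
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        int status;
        pid_t pid = fork();

        if (pid == 0)
                _exit(7);               /* child exits immediately */

        sleep(10);                      /* child is now a zombie; look at it
                                           with ps from another terminal */
        waitpid(pid, &status, 0);       /* reap: the table slot is freed */
        printf("child %d exited with %d\n", (int)pid, WEXITSTATUS(status));
        return 0;
}

Run it and do a "ps -l" from another terminal during the sleep: the child
typically shows up as <defunct>, with only its table entry (and exit status)
left.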

> >When I do a ps, I see that more than one process is waiting on
> >read_chan. No problem. What annoys me is that when I look at the
> >numeric output for the WCHAN, they are all the same one. Other
> >UNIXes will have a different WCHAN for each tty that is being
> >waited upon. Therefore, the number here is different.
>
> >That is a statement, not a question. There is probably no reason for
> >WCHAN values to be handled similarly in different Unixes.
>
> Wait channels are essentially the address of the routine that the process was
> at when it went to sleep. The values *are* handled the same in different
> Unixes as all UNIX (that I know of) use wait channels and they serve the same
> function. The question is "why would the WCHAN be the same for all processes?"
>

OK, there are several misconceptions here.

First, not all Unixes use wait channels at all. SMP versions of Unix tend to
use spinlocks and semaphores. The equivalent of the wchan for these is usually
the address of the semaphore on which the process is sleeping.

For those traditional Unixes which use sleep() and wakeup(), the value passed
is simply a number, nothing more, nothing less. The only thing the implementor
has to guarantee is that each event has a unique number. The usual way this is
achieved is to use the address of the object being waited on, which is normally
a data structure. One golden rule, if you ever expect to get SMP scalability,
is "lock data, NOT code". So I would expect to see a different WCHAN for
each getty on a traditional UNIX, since each process is waiting for the
read "queue" of its associated tty to become non-empty. Processes calling
sleep() record the wchan in their struct proc, and wakeup() trawls through
the proc table looking for sleeping processes whose recorded wchan matches
(a toy model of this is sketched below).
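
Here is the toy model mentioned above: a trivial user-space simulation of that
sleep()/wakeup() convention (toy_sleep, toy_wakeup, proctab and the tty
structure are all invented for illustration):

/*
 * Toy user-space model of traditional sleep()/wakeup(): the "wchan" is
 * just the address of the data structure being waited on, and wakeup()
 * walks the proc table waking every sleeper whose recorded wchan
 * matches.  Purely illustrative, not real kernel code.
 */
#include <stddef.h>
#include <stdio.h>

#define NPROC 4

struct tty { char rawq[64]; };          /* stand-in for a per-tty structure */

struct proc {
        int sleeping;
        const void *wchan;              /* address of the object slept on */
};

static struct proc proctab[NPROC];

static void toy_sleep(struct proc *p, const void *chan)
{
        p->sleeping = 1;
        p->wchan = chan;
}

static void toy_wakeup(const void *chan)
{
        for (int i = 0; i < NPROC; i++)
                if (proctab[i].sleeping && proctab[i].wchan == chan) {
                        proctab[i].sleeping = 0;
                        proctab[i].wchan = NULL;
                        printf("woke proc %d\n", i);
                }
}

int main(void)
{
        static struct tty tty0, tty1;

        toy_sleep(&proctab[0], &tty0.rawq);     /* getty on tty0 */
        toy_sleep(&proctab[1], &tty1.rawq);     /* getty on tty1: distinct wchan */

        toy_wakeup(&tty0.rawq);                 /* input arrived on tty0 only */
        return 0;
}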

Linux uses wait queues, which are quite different. The reason that you have
the same "WCHAN" for all the virtual consoles is that they are all sleeping
in read_chan() and the address is the same. This is an address in the code,
not the address of a data structure, which is what you would see on, e.g.,
a System V Release 3 system.
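
For comparison, the Linux wait-queue idiom of that era looks roughly like the
fragment below. This is a sketch only, not a complete or compilable driver;
my_read(), my_interrupt() and my_waitq are invented names, while
interruptible_sleep_on() and wake_up_interruptible() are the 1.3-era calls:

/*
 * Sketch of the Linux wait-queue idiom (1.3-era API), for contrast with
 * the sleep()/wakeup() model above.  Not a complete driver.
 */
#include <linux/sched.h>

static struct wait_queue *my_waitq = NULL;      /* per-device queue */
static int data_ready = 0;

static int my_read(void)
{
        while (!data_ready)
                interruptible_sleep_on(&my_waitq);      /* ps reports the
                                                           sleeper's WCHAN as
                                                           an address in this
                                                           code path */
        return 0;
}

static void my_interrupt(void)
{
        data_ready = 1;
        wake_up_interruptible(&my_waitq);       /* wake everyone queued here */
}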

Tim

--
Tim Wright, Worldwide Technical Services, | Email: timw@sequent.com
Sequent Computer Systems Inc., 15450, |
SW Koll Parkway, Beaverton, Oregon 97006 | Phone: +1-503-578-3822
"Applying computer technology is simply finding the right wrench to pound
in the correct screw"


