Subject: Re: 2.0.31
On Tue, 15 Jul 1997, Dr. Werner Fink wrote:
> > I'm using 27 and 29. I've tried 30 on a couple machines and had
> > nothing but trouble so I backed it off. These are heavily loaded
> > machines in an ISP, btw. In one case the '30 crashed about once
> > a day.
>
> Hmmm .. do you have more information about this crash? Any kernel
> messages, hardware info, or other important details?

Unfortunately not. I'll try to make a point of it the next time
I run tests. I hadn't subscribed to the list yet, and in fact
didn't even know about this particular gathering until about two weeks
ago, when I was overcome by madness and decided to work on a NeXT
port (yeah, still fighting with the cross compiler and the NeXT
unPosix libs - when I have a few hours spare, that is). Not to
mention the problem that these are the sorts of systems that
*must* be up: 10 minutes offline and the phones start ringing, which
makes kernel upgrades a very stressful and "interesting" undertaking...

I promise I'll post you the details the next time around.

However, I can describe one problem I had with 2.0.27 that I
eventually solved in a Rube Goldberg fashion.

One machine, with this process set:

USER     PID %CPU %MEM  VSZ  RSS TT STAT START  TIME COMMAND
daemon   821  0.0  0.4  824  304  ? S    16:34  0:00 /usr/sbin/rpc.portm
mail     830  0.0  1.0 1156  644  ? S    16:34  0:00 /usr/sbin/exim -bd
msql     858  0.0  0.9 1412  620  ? S    16:34  0:00 sh /usr/sbin/run-ms
msql     882  1.4  2.6 2468 1660  ? S    16:34  2:18 /usr/sbin/msqld
root       1  0.2  0.5  812  324  ? S    16:33  0:24 init [2]
root       2  0.0  0.0    0    0  ? SW   16:33  0:00 kflushd
root       3  0.0  0.0    0    0  ? SW<  16:33  0:00 kswapd
root       4  0.0  0.0    0    0  ? SW   16:33  0:00 nfsiod
root       5  0.0  0.0    0    0  ? SW   16:33  0:00 nfsiod
root       6  0.0  0.0    0    0  ? SW   16:33  0:00 nfsiod
root       7  0.0  0.0    0    0  ? SW   16:33  0:00 nfsiod
root      13  0.0  0.3  788  240  ? S    16:33  0:00 update
root     151  0.0  0.6  832  416  ? S    16:34  0:02 /sbin/syslogd -r
root     153  0.0  0.8 1004  516  ? S    16:34  0:00 /sbin/klogd
root     161  0.0  0.4  800  300  ? S    16:34  0:00 /sbin/kerneld
root     817  0.0  1.5 1492  992  ? S    16:34  0:02 /usr/sbin/gated
root     823  0.0  0.4  808  316  ? S    16:34  0:00 /usr/sbin/inetd
root     834  0.0  0.4  828  300  ? S    16:34  0:00 /usr/sbin/lpd
root     849  0.0  0.7 1020  476  ? S    16:34  0:03 /usr/sbin/sshd
root     855  0.0  0.5  832  368  ? S    16:34  0:00 /usr/sbin/cron
root     862  0.0  0.4  804  284  1 S    16:34  0:00 /sbin/getty 38400 t
root     863  0.0  0.4  804  284  2 S    16:34  0:00 /sbin/getty 38400 t
root     864  0.0  0.4  804  284  3 S    16:34  0:00 /sbin/getty 38400 t
root     865  0.0  0.4  804  284  4 S    16:34  0:00 /sbin/getty 38400 t
root     866  0.0  0.4  804  284  5 S    16:34  0:00 /sbin/getty 38400 t
root     867  0.0  0.4  804  284  6 S    16:34  0:00 /sbin/getty 38400 t
root    1818  0.1  2.0 1780 1296  ? S    18:30  0:03 /usr/sbin/named
root    2330  0.3  1.2 1048  776  ? S    19:09  0:00 /usr/sbin/sshd
root    2332  0.1  1.2 1452  808 p0 S    19:09  0:00 -bash
root    2336  0.0  0.6  908  420 p0 R    19:10  0:00 ps -aux

runs for approximately 24 hours. At the end of that time, it starts
slowing down. Tasks pile up and it grinds to a halt, to the point
where even the console is unresponsive. There are no crashes or error
reports. I've watched a "top" on it for hours and watched the final
death thrash happen, and there was not a thing to hang my hat on. It
just suddenly started using lots of VM after happily residing in core
(oops, showing my age) for most of a day. Then, over a period of 15-20
minutes, it would exponentially thrash itself to death.

No other machine shows the symptoms. Since these are mission-critical
systems I can't run tests on them. We finally set up a hardware power
timer on the box and synchronized a crontab shutdown with it. Ugly,
but it keeps the systems running, which is my primary concern.
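
Roughly, the arrangement looks like this (the times here are invented
for illustration; ours differ):

# root crontab entry: ask for a clean halt a few minutes before the
# external power timer cycles the outlet (times are examples only)
50 4 * * *   /sbin/shutdown -h now "nightly preventive reboot"
#
# hardware power timer: set to drop and restore power at 04:55

The box halts cleanly, sits dead for a few minutes, and the timer
brings it back up before it can accumulate another day of whatever
is going wrong.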

I have not yet dived into the Linux sources, but my own WAG would be
that some process is leaking memory (which somehow doesn't show up in
the numbers on the top display). When it reaches a critical point, the
system slows down due to swapping, to the point where it can't keep up
with the incoming email load (very high), and the process count climbs
because old processes are not completing as fast as new ones are
coming in. I tried throttling the things that could be throttled, but
that didn't solve it either. I haven't an effing clue. And like I
said, I can't run tests because customers would have my hide on a
flagpole...
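
If I ever get a maintenance window to instrument the box, even
something as dumb as the following would catch the lead-up to the
spiral (the log path and interval are placeholders, not what we
actually run):

#!/bin/sh
# Append a timestamped memory and process snapshot once a minute, so
# the 15-20 minute death thrash leaves a trace to read after reboot.
LOG=/var/log/memwatch.log        # placeholder path
while true; do
    date >> $LOG
    free >> $LOG                 # core vs. swap usage
    ps -aux >> $LOG              # same view as the listing above
    sleep 60
done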




