Subject: Re: Networking Status pre #2
Date: Tue, 14 May 1996 08:31:55 +0200
From: Axel Kohlmeyer <>
> Can people mail the list (not just me directly) if they are seeing the
> following with 2.0 pre 2

yes, but it's 2.0 pre 3 already *sigh*.
> 1. double lock on device queue

yes. here's a "fresh" excerpt from the syslog:

May 13 23:22:24 rincewind kernel: double lock on device queue!
May 13 23:56:32 rincewind kernel: double lock on device queue!
May 14 02:39:45 rincewind kernel: double lock on device queue!
May 14 06:21:37 rincewind kernel: double lock on device queue!

it seems these messages coincide with the mail flood from vger coming in.
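to put a number on that impression, a quick helper like the one below tallies
the messages per hour, so they can be lined up against the mail log. this is
only a sketch, not part of the report itself: it assumes a stock syslogd line
format ("May 13 23:22:24 host kernel: ...") and /var/log/messages as the
default file, and the name dlq_count.c is just a placeholder.

/* dlq_count.c - count "double lock on device queue" messages per hour
 * in a syslog file, to check whether they line up with the mail bursts.
 * Assumes the usual syslog timestamp "May 13 23:22:24 ..." at the start
 * of each line; pass a different log file as the first argument.
 */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "/var/log/messages";
    FILE *fp = fopen(path, "r");
    char line[1024];
    long count[24] = {0};
    int h;

    if (!fp) {
        perror(path);
        return 1;
    }
    while (fgets(line, sizeof(line), fp)) {
        if (strstr(line, "double lock on device queue") == NULL)
            continue;
        /* the hour starts at offset 7 of "May 13 23:22:24 ..." */
        if (sscanf(line + 7, "%d", &h) == 1 && h >= 0 && h < 24)
            count[h]++;
    }
    fclose(fp);
    for (h = 0; h < 24; h++)
        if (count[h])
            printf("%02d:00-%02d:59  %ld message(s)\n", h, h, count[h]);
    return 0;
}

compile with gcc -o dlq_count dlq_count.c and point it at the log file if it
lives somewhere other than /var/log/messages.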
> 2. repeated socket destroy messages every 10 seconds

they would go away as soon as i terminate xntpd. (seen them on every
shutdown for a long time.)

[7|8:22] akohlmey@rincewind:~/compile> netstat -u
Active Internet connections
Proto Recv-Q Send-Q Local Address          Foreign Address        (State)      User
udp        0      0 rincewind.chemie.u:721 temptc.chemie.uni:2049 ESTABLISHED
udp        0      0 localhost:718          localhost:amd          ESTABLISHED
udp        0      0 localhost:ntp          *:*
udp        0   1260 rincewind.chemie.u:ntp *:*
               ^^^^
and here's the number that will come with it.

btw: i can ping hosts that are actually down, get a few "socket destroy
delayed" messages, and then it works like before. this is how it _should_
be, right?
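to catch that Send-Q number without staring at netstat, a rough watcher like
the following can be left running. again only a sketch: it assumes the
2.0-era /proc/net/udp layout, where the tx/rx queue sizes are printed as one
zero-padded hex field, and it simply dumps the raw lines of sockets with
anything queued instead of decoding the columns.

/* udpq_watch.c - poll /proc/net/udp every 10 seconds and print the
 * sockets that still have data queued, i.e. whose tx_queue:rx_queue
 * field is not "00000000:00000000".  Assumes the 2.0-style layout of
 * /proc/net/udp; lines are printed raw, exit with Ctrl-C.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char line[512];

    for (;;) {
        FILE *fp = fopen("/proc/net/udp", "r");
        if (!fp) {
            perror("/proc/net/udp");
            return 1;
        }
        while (fgets(line, sizeof(line), fp)) {
            /* skip the header line */
            if (strstr(line, "local_address"))
                continue;
            /* print only sockets with a non-empty tx or rx queue */
            if (strstr(line, " 00000000:00000000 ") == NULL)
                printf("%s", line);
        }
        fclose(fp);
        fflush(stdout);
        sleep(10);   /* same interval as the destroy messages */
    }
}

printing the raw /proc lines keeps the sketch independent of the exact
column widths the kernel uses.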
> 7. TCP performance problems.
> 8. bogus packet size/mismatched pointers reports on 8390 based ethernet
>    drivers.

since the smc-driver update my networking is faster than ever.
> I want to see how bad these problems actually are and how many people are
> seeing them - also hopefully a pattern so try and include hardware info.

i486-33MHz, ISA-bus, SMC-Ultra, 32MB, Mitsumi FX001D, SoundBlaster 16,
1.28 Gig Quantum and 540 Meg Maxtor
Axel.
=========================================================================
Axel Kohlmeyer                  email: axel.kohlmeyer@chemie.uni-ulm.de
Abteilung fuer Theoretische Chemie
Universitaet Ulm                D-89069 Ulm/Donau
=========================================================================