From: Bryan "TheBS" Smith <thebs@theseus.com>
Subject: Warning, user cutting in: Kernel 2.2.14, dirty buffers, stalls in performance ...
Date: 27 Mar 2000

Hi y'all. Sorry to cut in, but I've been trying to track down
the answers to some Linux fileserver performance issues over the
last week. I have tried various LUG lists (ELUG, LEAP-CF, SVLUG),
the NFS lists (which solved my NFS stability issue), the Linux MM
mailing lists, etc... I appreciate any advice, help, etc... that
anyone can give me at this point. Again, sorry to bother with such
obvious user issues here on the kernel list.

First off, fileserver specs:

Asus P2B-F mainboard (i440BX), Pentium-III 500MHz, 1GB RAM, ICP
Vortex GDT-series RAID controller w/64MB, 5x36GB IBM 7200rpm
U2W-SCSI (~100GB usable). RedHat 6.0 initially, partial upgrade to
6.1 (kernel, sysinit, etc...), kernel 2.2.14-5.0 (from Rawhide, but
now I see it is the RH62 kernel as well), nfs-utils 0.1.7 (went
to 2.2.14-5.0/nfs-utils-0.1.7 for stable NFS v2 serving, which it
has been).

My issue is this ...

Every hour or half hour, my sole fileserver (specs above) gets
drenched with massive I/O from a dirty-buffer flush. It lasts
approximately 15-60 seconds. The whole time, the ICP Vortex
gdtmon program shows ~30-50% dirty buffers (constantly being fed
by the OS). When this happens, my console and all my NFS clients'
(10 Solaris 2.6, 2 Linux 2.2) interactive sessions become almost
completely non-responsive (the NT/98 clients on Samba are not
affected). Even the server itself is barely usable.

Yet top reports the processor 95% or more idle (usually 97%).

What I have tried to do:

I have tried to gather all the info I can on the VM system in Linux
2.2[.14]. I have tried to piece together how bdflush/kflushd (I
take it bdflush is obsolete?), kupdate, kswapd, etc... all work.
I'm kinda lost, although I'm starting to read the kernel source ;->.

The system is rock solid (i.e., no crashes or complete stalls);
just the UNIX performance issues remain. I haven't totally ruled
out some 98/NT client viruses/crap on my network (we have a few
regular eye-candy victims here), but that would usually show up as
SMB traffic/daemon processes -- and my network is 100Mbps switched
at every node (~50 nodes: the server, 2 Linux boxes, 10 Solaris
2.6 boxes, ~40 98/NT systems).

I have most recently run /sbin/update -f 5 to flush every 5
seconds, and that seems to have helped some (no more 15-60 second
near-stalls). The docs on my system still say it's for bdflush
(again, obsolete?). I see kflushd, kupdate and kswapd running on
my system. Again, I'm a total newbie to how the VM system works in
Linux.
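
To make that stick across reboots, I figure I can just append the
same command to rc.local (untested on my end; path per stock
RedHat):

  # /etc/rc.d/rc.local -- re-apply the 5-second flush interval at boot
  /sbin/update -f 5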

Questions:

[ Before answering these, understand I cannot take our system down
unless absolutely necessary. We are running 24x7 right now and
unless I know something is going to work, I really cannot afford
"trial and error" at this point. Thanx ... ]

1. How do I "peek" at how the OS' I/O is doing? Something like
"top", only for I/O? Any tools anyone knows of for Linux?

2. Any tweaks to the VM system where this near-stall is occurring?
What about my /sbin/update -f 5 command above?

3. When I run /sbin/update -d, I get this:

[root@thor /root]# update -d
bdflush version 1.4
0: 40 Max fraction of LRU list to examine for dirty blocks
1: 500 Max number of dirty blocks to write each time bdflush activated
2: 64 Num of clean buffers to be loaded onto free list by refill_freelist
3: 256 Dirty block threshold for activating bdflush in refill_freelist
4: 500 Percentage of cache to scan for free clusters
5: 3000 Time for data buffers to age before flushing
6: 500 Time for non-data (dir, bitmap, etc) buffers to age before flushing
7: 1884 Time buffer cache load average constant
8: 2 LAV ratio (used to determine threshold for buffer fratricide).

Is that even the right binary for RedHat 6.1? I mean, I have
installed the RPM for bdflush-1.5. Heck, even the new RedHat 6.2
is coming with the same RPM, while Caldera OpenLinux seems to come
with bdflush-1.62?!?!
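
If tuning is the answer, I gather these same nine fields are
exposed as /proc/sys/vm/bdflush on 2.2 kernels. Assuming that
sysctl is writable on 2.2.14, something like the following should
drop the data-buffer age from 30 to 10 seconds (field 5, in
jiffies, HZ=100) -- strictly a sketch, NOT tried on the production
box:

  # show current settings (same order as the update -d output above)
  cat /proc/sys/vm/bdflush
  # field 5 is the data-buffer age; all other fields left at defaults
  echo "40 500 64 256 500 1000 500 1884 2" > /proc/sys/vm/bdflush

Is that any safer than my update -f 5 hack, or just the same knob?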

4. Is the inefficiency of NFS v2 hurting my performance? Or
bringing down my Solaris clients? Comments?
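
For what it's worth, NFS v2 caps each transfer at 8KB, so about
all I can do client-side is make sure the block sizes are maxed
out. On the Solaris 2.6 boxes that would be something like this
(export and mount point are made up for illustration):

  # Solaris 2.6 client; 8K rsize/wsize is already the v2 ceiling
  mount -F nfs -o vers=2,rsize=8192,wsize=8192 thor:/export /mnt/thor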

5. I really think the hardware is the ultimate culprit (I didn't
design or pick it, either). I mean, that ICP Vortex controller is
probably stalling the entire, single PCI bus (single PCI on the
i440BX chipset) on these writes. We are hammering this server hard
over our 100Mbps network (totally switched here). IMHO, the
network and RAID controllers should be on separate PCI buses
(AGP network/RAID, anyone? ;-). Anyone concur with this?
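
Some back-of-the-envelope numbers behind that suspicion (all
theoretical peaks; sustained rates are lower):

  32-bit/33MHz PCI:      ~133 MB/s, shared by everything on the bus
  U2W SCSI (GDT RAID):    up to 80 MB/s burst
  100Mbps Ethernet NIC:  ~12.5 MB/s

So a flush driving the RAID controller at even half its burst rate
eats roughly a third of the whole bus -- several times everything
the NIC ever moves.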

[ BTW, I'm looking at a 450NX mainboard (dual PCI bus, quad
Xeon -- they are getting cheap nowadays). But, again, I cannot do
anything until we are out of "production mode" sometime this
summer. ]

Thanx in advance all ...

-- Bryan "TheBS" Smith
Theseus Logic, Inc.

--
Bryan "TheBS" Smith -- Engineer, IT Professional and Hacker
E-mail: mailto:thebs@theseus.com,b.j.smith@ieee.org
Disclaimer: http://www.SmithConcepts.com/legal.html
*************************************************************
TheBS ... Serving E-mail filters to /dev/null since 1989

