Subject: Re: howto combat highly pathologic latencies on a server?
On Thursday 11 March 2010, 01:15:14 Hans-Peter Jansen wrote:
> On Wednesday 10 March 2010, 19:15:48 Christoph Hellwig wrote:
> > On Wed, Mar 10, 2010 at 06:17:42PM +0100, Hans-Peter Jansen wrote:
> > > While this system usually operates fine, it suffers from delays, that
> > > are displayed in latencytop as: "Writing page to disk: 8425,5
> > > ms": ftp://urpla.net/lat-8.4sec.png, but we see them also in the
> > > 1.7-4.8 sec range: ftp://urpla.net/lat-1.7sec.png,
> > > ftp://urpla.net/lat-2.9sec.png, ftp://urpla.net/lat-4.6sec.png and
> > > ftp://urpla.net/lat-4.8sec.png.
> > >
> > > From other observations, this issue "feels" like it is induced by single
> > > synchronisation points in the block layer, e.g. if I create heavy IO
> > > load on one RAID array, say resizing a VMware disk image, it can take
> > > up to a minute to log in by ssh, although the ssh login does not
> > > touch this area at all (different RAID arrays). Note, that the
> > > latencytop snapshots above are made during normal operation, not this
> > > kind of load..
> >
> > I had very similar issues on various systems (mostly using xfs, but
> > some with ext3, too) using kernels before ~ 2.6.30 when using the cfq
> > I/O scheduler. Switching to noop fixed that for me, or upgrading to a
> > recent kernel where cfq behaves better again.
>
> Christoph, thanks for this valuable suggestion: I've changed it to noop
> right away, and also:
>
> vm.dirty_ratio = 20
> vm.dirty_background_ratio = 1
>
> since the defaults of 40 and 10 also don't seem to fit my needs. Even the
> 20 might still be oversized with 8 GB total mem.
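
(For the record, these knobs were flipped at runtime roughly like this; the
same keys in /etc/sysctl.conf would make them persistent:)

    sysctl -w vm.dirty_ratio=20
    sysctl -w vm.dirty_background_ratio=1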

That was a bad idea. I've reverted the vm tweaks, as they made things even
worse.

After switching to noop and activating lazy-count on all filesystems (the
switch itself is sketched below, after the latencytop output), the
pathological behavior with running IO hooks seems to be relieved, but the
latency due to VMware Server persists:

Cause                                    Maximum     Percentage
Writing a page to disk                435.8 msec          9.9 %
Writing buffer to disk (synchronous)  295.3 msec          1.6 %
Scheduler: waiting for cpu             80.1 msec         11.7 %
Reading from a pipe                     9.3 msec          0.0 %
Waiting for event (poll)                5.0 msec         76.2 %
Waiting for event (select)              4.8 msec          0.4 %
Waiting for event (epoll)               4.7 msec          0.0 %
Truncating file                         4.3 msec          0.0 %
Userspace lock contention               3.3 msec          0.0 %

Process vmware-vmx (7907)             Total: 7635.8 msec
Writing a page to disk                435.8 msec         43.8 %
Scheduler: waiting for cpu              9.1 msec         52.7 %
Waiting for event (poll)                5.0 msec          3.5 %
[HostIF_SemaphoreWait]                  0.2 msec          0.0 %
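
(The scheduler switch mentioned above is the usual echo into sysfs, and the
lazy counters are an xfs feature; a sketch, assuming the filesystems here
are xfs and with sdX/mountpoint as placeholders:)

    # per device at runtime; elevator=noop on the kernel command line
    # would make it the global default
    echo noop > /sys/block/sdX/queue/scheduler

    # check, then enable lazy superblock counters (xfs_admin needs the
    # filesystem unmounted)
    xfs_info /mountpoint | grep lazy-count
    xfs_admin -c 1 /dev/sdX1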

Although I set writeThrough to "FALSE" on that VM, it operates on a
monolithic flat 24 GB "drive" file, it is not allowed to swap, and it is
itself only lightly used, it still writes (whatever it writes) synchronously
and trashes the latency of the whole system. (It's nearly always the one
that latencytop shows, with combined latencies ranging from one to eight
seconds.)
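
(For completeness, the writeThrough flag lives in the VM's .vmx file; the
path and the scsi0:0 disk node below are just placeholders:)

    grep -i writethrough /path/to/vm.vmx
    # expected:  scsi0:0.writeThrough = "FALSE"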

I would love to migrate that stuff to a saner VM technology (e.g. kvm), but
unfortunately the Opteron 285 CPUs are socket 940 based, and thus lack the
hardware virtualisation extensions that kvm needs. Correct me if I'm wrong,
please.
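
(The quick check on a running box is the cpu flags; kvm's full
virtualisation needs svm on AMD:)

    # prints 0 if AMD-V is indeed missing on these Opterons
    egrep -c '(vmx|svm)' /proc/cpuinfo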

This VMware Server 1.0.* stuff is also getting in the way of upgrading to a
newer kernel. The only way up the kernel stairs might be VMware Server 2,
but without serious indication that it works much better, I won't take that
route. Hints welcome.

Upgrading the hardware, combined with using SSD drives, seems to be the only
really feasible approach, but given the economic pressure in the transport
industry, that's currently not possible either.

Anyway, thanks for your suggestions,
Pete

