Subject: Re: Linux 2.6.29
On Tue, 24 Mar 2009 09:20:32 -0400
Theodore Tso <> wrote:
> They don't solve the problem where there is a *huge* amount of writes
> going on, though --- if something is dirtying pages at a rate far
> greater than the local disk can write them out, say, either "dd
> if=/dev/zero of=/mnt/make-lots-of-writes" or a massive distcc cluster
> driving a huge amount of data towards a single system or a wget over a
> local 100 megabit ethernet from a massive NFS server where everything
> is in cache, then you can have a major delay with the fsync().
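
To make that concrete, here's roughly what the failure mode looks like
as a test program. This is only a sketch I threw together (the file
names and sizes are arbitrary, not from Ted's mail):

/*
 * Sketch: the child dirties pages much faster than the disk can
 * drain them, then the parent times a tiny write + fsync().
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <time.h>
#include <sys/wait.h>

#define CHUNK (1 << 20)		/* 1 MiB per write() */

static double now(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
	pid_t pid = fork();

	if (pid == 0) {
		/* child: generate a few GiB of dirty pages */
		char *buf = calloc(1, CHUNK);
		int fd = open("make-lots-of-writes",
			      O_WRONLY | O_CREAT | O_TRUNC, 0644);
		if (fd < 0 || !buf)
			_exit(1);
		for (int i = 0; i < 4096; i++)
			write(fd, buf, CHUNK);
		_exit(0);
	}

	sleep(2);	/* let dirty pages pile up */

	/* parent: the "innocent" small file whose fsync stalls */
	int fd = open("small-file", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	write(fd, "hello\n", 6);

	double t0 = now();
	fsync(fd);
	printf("fsync took %.2f seconds\n", now() - t0);

	close(fd);
	waitpid(pid, NULL, 0);
	return 0;
}

The point is that the parent's fsync() on a six-byte file can end up
waiting behind the child's gigabytes of dirty data.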

You make it sound like this is hard to do... I was running into this
problem *every day* until I moved to XFS recently. I'm running a
fairly beefy desktop (VMware running a crappy Windows install w/AV junk
on it, builds, icecream, and large mailboxes) and have a lot of RAM, but
the machine became unusable for minutes at a time, which was just totally
unacceptable, thus the switch. Things have been better since, but are
still a little choppy.

I remember early in the 2.6.x days there was a lot of focus on making
interactive performance good, and for a long time it was. But this I/O
problem has been around for a *long* time now... What happened? Don't
many people run into this daily? Do all the filesystem hackers run
with special mount options to mitigate the problem?
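
(For what it's worth, the only generic knobs I know of are the vm
dirty-page thresholds rather than mount options per se; here's a
trivial sketch that just dumps the current values. Turning them down is
the folk remedy I keep hearing about, though I can't vouch for it.)

/*
 * Dump the writeback thresholds.  Lowering vm.dirty_ratio and
 * vm.dirty_background_ratio (via sysctl, or by writing to these
 * files as root) caps how much dirty data can pile up before
 * writeback kicks in.
 */
#include <stdio.h>

int main(void)
{
	const char *knobs[] = {
		"/proc/sys/vm/dirty_ratio",
		"/proc/sys/vm/dirty_background_ratio",
	};
	char buf[64];

	for (int i = 0; i < 2; i++) {
		FILE *f = fopen(knobs[i], "r");
		if (!f)
			continue;
		if (fgets(buf, sizeof(buf), f))
			printf("%s = %s", knobs[i], buf);
		fclose(f);
	}
	return 0;
}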

Jesse Barnes, Intel Open Source Technology Center
