Subject: Re: 2.4.6-pre2, pre3 VM Behavior
On Thu, 14 Jun 2001, Roger Larsson wrote:

> On Thursday 14 June 2001 10:47, Daniel Phillips wrote:
> > On Thursday 14 June 2001 05:16, Rik van Riel wrote:
> > > On Wed, 13 Jun 2001, Tom Sightler wrote:
> > > > Quoting Rik van Riel <>:
> > > > > After the initial burst, the system should stabilise,
> > > > > starting the writeout of pages before we run low on
> > > > > memory. How to handle the initial burst is something
> > > > > I haven't figured out yet ... ;)
> > > >
> > > > Well, at least I know that this is expected with the VM, although I do
> > > > still think this is bad behavior. If my disk is idle why would I wait
> > > > until I have greater than 100MB of data to write before I finally
> > > > start actually moving some data to disk?
> > >
> > > The file _could_ be a temporary file, which gets removed
> > > before we'd get around to writing it to disk. Sure, the
> > > chances of this happening with a single file are close to
> > > zero, but having 100MB from 200 different temp files on a
> > > shell server isn't unreasonable to expect.
> >
> > This still doesn't make sense if the disk bandwidth isn't being used.
> >
> It does if you are running on a laptop. Then you do not want the pages
> to go out all the time. The disk has gone to sleep, needs to spin up to
> write a few pages, stays idle for a while, goes back to sleep, a few more
> pages, ...

True, you'd never want data trickling to disk on a laptop on battery.
If you have to write, you'd want to write everything dirty at once to
extend the period between disk powerups to the max.
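
Just to make that policy concrete, here's a little userspace toy (every
name in it is invented for the sketch, it is not 2.4 VM code): hold dirty
pages while the disk sleeps, and when something forces a spin-up anyway,
piggyback the whole dirty backlog on that one burst.

/*
 * Hypothetical sketch, not a patch: batch all dirty writeout into the
 * window where the disk is already spinning, so it can sleep longer.
 */
#include <stdio.h>
#include <stdbool.h>

static unsigned long dirty_pages;
static bool disk_spinning;

static void spin_up(void)
{
        disk_spinning = true;   /* expensive: power and latency */
}

/* write every dirty page while the disk is up anyway */
static void flush_all_dirty(void)
{
        if (!disk_spinning)
                spin_up();
        printf("flushing %lu pages in one burst\n", dirty_pages);
        dirty_pages = 0;
}

int main(void)
{
        for (int i = 0; i < 3000; i++)
                dirty_pages++;          /* writes trickle in, disk stays asleep */

        spin_up();                      /* a read miss forces the disk up ... */
        flush_all_dirty();              /* ... so dump all dirty data with it */

        disk_spinning = false;          /* and let it sleep again */
        return 0;
}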

With file IO, there is a high probability that the disk is running
while you are generating files, temp or not (because you generally do
read/write, not ____/write), so that doesn't negate the argument.

Delayed write is definitely nice for temp files or whatever.. until
your dirty data no longer comfortably fits in RAM. At that point, the
write delay just became lost time and wasted disk bandwidth, whether
it's a laptop or not. The problem is: how do you know when the dirty
data is going to become too large for comfort?

One thing which seems to me likely to improve behavior is to have two
goals. One is the trigger point for starting flushing; the second is
a goal below the trigger, so that we define a quantity which has to be
flushed and don't have to flush again so soon. Stopping as soon as we
drop back under the flush trigger means we'll hit that limit again
almost instantly if the box is doing any writing.. bad news for the
laptop and no benefit to desktop or server boxen. Another benefit of
having two goals is that it's easy to see whether you're making
progress or losing ground, so you can modify behavior accordingly.
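
Something like the following userspace toy is what I have in mind. The
names (FLUSH_TRIGGER, FLUSH_GOAL, balance_dirty) are all made up for the
sketch, nothing here is the actual 2.4 writeback code:

/*
 * Two-goal flushing sketch: start writeback above the trigger, keep
 * writing until we are back down at the goal, so the very next write
 * doesn't trip the trigger again.
 */
#include <stdio.h>

#define FLUSH_TRIGGER   1000    /* start writeback above this many dirty pages */
#define FLUSH_GOAL       600    /* ... and keep writing until below this */

static unsigned long dirty_pages;

/* pretend to write one dirty page to disk */
static void write_one_page(void)
{
        dirty_pages--;
}

/* called whenever dirty_pages grows */
static void balance_dirty(void)
{
        if (dirty_pages <= FLUSH_TRIGGER)
                return;         /* still under the trigger, do nothing */

        /*
         * Flush down to the goal, not just below the trigger: one long
         * burst instead of many tiny ones (good for a sleeping laptop
         * disk), and the distance above the goal tells you whether you
         * are gaining or losing ground.
         */
        while (dirty_pages > FLUSH_GOAL)
                write_one_page();
}

int main(void)
{
        for (int i = 0; i < 5000; i++) {
                dirty_pages++;          /* a process dirties a page */
                balance_dirty();
        }
        printf("dirty pages left: %lu\n", dirty_pages);
        return 0;
}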

Rik mentioned that the system definitely needs to be optimized for read,
and just thinking about it without possessing much OS theory, that rings
of golden truth. Now, what does having a lot of dirt lying around do to
asynchronous readahead?.. it turns it synchronous prematurely and negates
the read optimization.


(That's why I mentioned having tried a clean shortage, to ensure more
headroom for readahead and keep it asynchronous longer. Not working the
disk hard 'enough' [define that;] must harm performance by turning both
read _and_ write into strictly synchronous operations prematurely.)
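
The readahead point, boiled down to a toy (again, clean_pages and
ra_window are names I just invented, not anything in the tree): with
clean headroom you submit a whole window ahead of the reader, without
it the window collapses and every read waits for its own page.

/*
 * Minimal sketch: readahead only stays asynchronous while there is
 * clean memory to put the read-ahead pages in.
 */
#include <stdio.h>

#define RA_MAX_PAGES    32      /* normal readahead window */
#define CLEAN_MIN       256     /* below this, no headroom for readahead */

static unsigned long clean_pages = 1024;

/* how many pages we dare read ahead of the current position */
static unsigned int ra_window(void)
{
        if (clean_pages < CLEAN_MIN)
                return 0;       /* no clean headroom: reads go synchronous */
        return RA_MAX_PAGES;
}

int main(void)
{
        /* plenty of clean memory: asynchronous readahead */
        printf("window with %lu clean pages: %u\n", clean_pages, ra_window());

        /* dirty data has crowded out the clean pages */
        clean_pages = 100;
        printf("window with %lu clean pages: %u\n", clean_pages, ra_window());
        return 0;
}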
