From: Daniel Phillips
Subject: Re: page_launder() on 2.4.9/10 issue
Date: 29 August 2001
On August 28, 2001 08:17 pm, Linus Torvalds wrote:
> On Tue, 28 Aug 2001, Daniel Phillips wrote:
> > On August 28, 2001 05:36 am, Marcelo Tosatti wrote:
> > > Linus,
> > >
> > > I just noticed that the new page_launder() logic has a big bad problem.
> > >
> > > The window to find and free previously written-out pages by
> > > page_launder() is the number of writeable pages on the inactive
> > > dirty list.
>
> No.
>
> There is no "window". The page_launder() logic is very clear - it will
> write out any dirty pages that it finds that are "old".
>
> > > We'll keep writing out dirty pages (as long as they are available) even
> > > if we have a ton of cleaned pages: it's just that we don't see them
> > > because we scan a small piece of the inactive dirty list each time.
>
> So? We need to write them out at some point anyway. Isn't it much better
> to be graceful about it, and allow the writeout to happen in the
> background? The way things _used_ to work, we'd delay the write-out until
> we REALLY had to, which is great for dbench, but is really horrible for
> any normal load.

I thought about it a lot and I had a really hard time coming up with examples
where starting writeout early is not the right thing to do. Even write
merging takes care of itself, because if the system is heavily loaded the
queue will naturally back up and create all the write merging opportunities
we need. Temporary file deletion is hurt by early writeout, yes, but that is
really something we should be handling at the filesystem level, not in the
VFS. (According to this theory, XFS with its delayed allocation should be a
star performer on dbench.)
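
To make that concrete, here is a little user-space toy (made-up struct and a
hypothetical queue_async_write() helper, not the real kernel code) of what I
mean by starting writeout early: the scan just queues a write for every dirty
page it meets, and merging is left to the request queue backing up under load.

#include <stdbool.h>
#include <stdio.h>

#define NPAGES 16

struct toy_page {
	int index;		/* offset into the file, in pages */
	bool dirty;		/* needs writeback */
	bool writeback;		/* write already queued */
};

/* Made-up stand-in for starting asynchronous writeback on one page. */
static void queue_async_write(struct toy_page *p)
{
	p->writeback = true;
	p->dirty = false;
	printf("queued writeback for page %d\n", p->index);
}

/*
 * Early writeout: every dirty page found on the inactive list gets its
 * write started immediately.  Merging takes care of itself: under heavy
 * load the request queue backs up and adjacent writes coalesce there
 * before they reach the disk.
 */
static void launder_scan(struct toy_page *list, int n)
{
	int i;

	for (i = 0; i < n; i++)
		if (list[i].dirty && !list[i].writeback)
			queue_async_write(&list[i]);
}

int main(void)
{
	struct toy_page inactive_dirty[NPAGES];
	int i;

	for (i = 0; i < NPAGES; i++) {
		inactive_dirty[i].index = i;
		inactive_dirty[i].dirty = (i % 2 == 0);
		inactive_dirty[i].writeback = false;
	}

	launder_scan(inactive_dirty, NPAGES);
	return 0;
}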

The only case I can see where early writeout is not necessarily the best
policy is when we have lots of input going on at the same time. The classic
example is program startup. If there are lots of inactive/clean pages, we
want to hold off writeout until the swap-in activity due to program startup
winds down or eats all the inactive/clean pages.
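
Roughly the policy I have in mind, again as a user-space toy with invented
counters and an assumed CLEAN_MIN threshold (not a real tunable): leave the
dirty pages alone while the inactive/clean pool is healthy, and only start
laundering once read-in has drawn it down.

#include <stdio.h>

#define CLEAN_MIN 32	/* assumed threshold, not a real kernel tunable */

static int inactive_clean_pages = 100;	/* toy counters, not kernel state */
static int inactive_dirty_pages = 50;

static void write_one_dirty_page(void)
{
	inactive_dirty_pages--;
	inactive_clean_pages++;		/* once clean, the page is reclaimable */
}

static void maybe_launder(void)
{
	/* Plenty of clean pages: let program startup eat those first. */
	if (inactive_clean_pages >= CLEAN_MIN)
		return;

	/* Clean pool is running low: start writeout to refill it. */
	while (inactive_clean_pages < CLEAN_MIN && inactive_dirty_pages > 0)
		write_one_dirty_page();
}

int main(void)
{
	int i;

	/* Simulate a burst of read-in consuming clean pages. */
	for (i = 0; i < 90; i++) {
		inactive_clean_pages--;		/* page reclaimed for read-in */
		maybe_launder();
	}
	printf("clean=%d dirty=%d\n", inactive_clean_pages,
	       inactive_dirty_pages);
	return 0;
}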

> Think about it - do you really want the system to actively try to reach
> the point where it has no "regular" pages left, and has to start writing
> stuff out (and wait for them synchronously) in order to free up memory? I
> strongly feel that the old code was really really wrong - it may have been
> wonderful for throughput, but it had non-repeatable behaviour, and easily
> caused the inactive_dirty list to fill up with dirty pages because it
> unfairly penalized clean pages.

It was just plain wrong. We got sucked into the trap of optimizing for
dbench.

> [...]
> > > With asynchronous i_dirty->i_clean movement (moving a cleaned page to
> > > the clean list in the IO completion handler; please don't consider
> > > that for 2.4 :)) this would not happen either.
> >
> > Or we could have parallel lists for dirty and clean.
>
> Well, more importantly, do you actually have good reason to believe that
> it is wrong to try to write things out asynchronously?

Asynchronous writeout is good, but we don't want to blindly submit every
dirty page as soon as it arrives on the inactive_dirty list. That would
throw away information about the short-term activity of pages, without
which we have no means of distinguishing between LFU and LRU pages. It
doesn't matter under light disk load because... the load is light (duh),
but under heavy load it does matter.
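
Something along these lines, sketched as another user-space toy with made-up
fields and an assumed two-pass aging rule: a dirty page has to sit out a
couple of scan passes before its write goes out, and a page that gets
referenced while it waits is rescued instead of written, so the short-term
activity information survives.

#include <stdbool.h>
#include <stdio.h>

#define AGE_PASSES 2	/* assumed: passes a page must survive before writeout */

struct toy_page {
	int index;
	bool dirty;
	bool referenced;	/* touched again since going inactive */
	int age;		/* scan passes spent waiting on the list */
};

static void scan_pass(struct toy_page *pages, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		struct toy_page *p = &pages[i];

		if (!p->dirty)
			continue;

		if (p->referenced) {
			/* Reused recently: rescue it, don't waste the write. */
			printf("page %d rescued to the active list\n", p->index);
			p->referenced = false;
			p->age = 0;
			continue;
		}

		if (++p->age >= AGE_PASSES) {
			printf("page %d: writeback submitted\n", p->index);
			p->dirty = false;
		}
	}
}

int main(void)
{
	struct toy_page pages[3] = {
		{ .index = 0, .dirty = true },
		{ .index = 1, .dirty = true },
		{ .index = 2, .dirty = true },
	};

	scan_pass(pages, 3);
	pages[1].referenced = true;	/* page 1 gets touched between passes */
	scan_pass(pages, 3);
	scan_pass(pages, 3);
	return 0;
}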

--
Daniel
