Date: 17 Oct 2003
From: Jens Axboe
Subject: Re: [PATCH] ide write barrier support
On Thu, Oct 16 2003, Greg Stark wrote:
>
> "Mudama, Eric" <eric_mudama@Maxtor.com> writes:
>
> > It takes us multiple servo wedges to know that we think our write to the
> > media went in the right place, therefore by definition if we didn't already
> > have the next command's data, we've already missed our target location and
> > have to wait a full revolution to put the new data on the media. Since we
> > can't report good status for the flush until after we're sure the data is
> > down properly, we'll always blow a rev.
>
> Ok, on further thought, I think a write barrier isn't really what the
> database needs. It seems stronger and more resource intensive than
> necessary.
>
> Postgres writes a transaction log. When the client issues a commit postgres
> cannot return until it knows all the writes for the transaction log for that
> transaction have completed.
>
> Currently it issues an fsync, which is already a bit stronger than
> necessary. But a write barrier sounds even stronger: it would block all
> other disk i/o until the fsync completes. That is completely unnecessary;
> it would prevent other transactions from proceeding at all until the
> commit finished.
>
> Ideally postgres just needs to call some kind of fsync syscall that
> guarantees it won't return until all buffers from the file that were dirty
> prior to the sync have been flushed and the disk has really been synced.
> It's fine for buffers that were dirtied later to get synced as well, as
> long as all the old buffers are synced.
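
A minimal sketch of the commit path described above, in plain C. This is
not PostgreSQL's actual code; the function and its arguments are made up.
The only point is the ordering: append the commit record, fsync the log,
and only then acknowledge the commit.

#include <unistd.h>

/* Sketch only: append the commit record to the transaction log and do
 * not report success until fsync() says it is on stable storage. */
static int commit_transaction(int log_fd, const void *rec, size_t len)
{
	/* Append the commit record to the transaction log. */
	if (write(log_fd, rec, len) != (ssize_t)len)
		return -1;

	/* Block until the log file is really on disk.  Only this file
	 * needs syncing; other i/o can keep flowing meanwhile. */
	if (fsync(log_fd) < 0)
		return -1;

	/* Now it is safe to tell the client the commit succeeded. */
	return 0;
}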

I've been thinking about adding WRITESYNC to do exactly that, and keeping
WRITEBARRIER with its current functionality for journalled file
systems. WRITESYNC would be exactly what you describe; it just won't
imply any io scheduler ordering. So a post-flush would be enough to
handle that case.
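
As a rough sketch of what a post-flush means at the drive level:
issue_write() and issue_flush_cache() below are hypothetical stand-ins,
not the real ide driver interface. The point is that a WRITESYNC is just
a write followed by a cache flush, with no ordering against other queued
requests.

#include <stdio.h>

/* Hypothetical stand-ins for "send a write to the drive" and "send a
 * FLUSH CACHE to the drive"; they just print what they would do. */
static int issue_write(int blocknr)
{
	printf("WRITE block %d (may sit in the drive's write cache)\n", blocknr);
	return 0;
}

static int issue_flush_cache(void)
{
	printf("FLUSH CACHE (drain the write cache to the platter)\n");
	return 0;
}

/* WRITESYNC sketch: the block is durable once this returns, but the io
 * scheduler is still free to reorder unrelated requests around it. */
static int writesync(int blocknr)
{
	if (issue_write(blocknr) < 0)
		return -1;
	return issue_flush_cache();
}

int main(void)
{
	return writesync(42);
}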

The problem is that, as far as I can see, the best way to make fsync
really work is to make the last write a barrier write. That
automagically gets everything right for you: when the last block goes
to disk, you know the previous ones already have. And when the last
block completes, you know the whole lot is on the platter. If you were
just using WRITESYNC, you would have to WRITESYNC every block in that
range instead of issuing WRITE WRITE WRITE ... WRITEBARRIER. So the
barrier would still end up being cheaper, unless the fsync only flushes
a single page, in which case the WRITESYNC is enough.
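
To make the comparison concrete, here is a sketch of the two strategies.
The request types and submit_block() are hypothetical stand-ins for the
block layer, not real kernel calls; the barrier variant pays one flush
for the whole range, while the WRITESYNC variant pays one per block.

#include <stdio.h>

enum wtype { WRITE_PLAIN, WRITE_SYNC, WRITE_BARRIER };

/* Stub: in the kernel this would queue a request to the io scheduler. */
static void submit_block(int blocknr, enum wtype type)
{
	static const char *name[] = { "WRITE", "WRITESYNC", "WRITEBARRIER" };
	printf("block %d: %s\n", blocknr, name[type]);
}

/* Barrier variant: plain writes, one barrier on the last block.  When
 * the barrier completes, everything before it is known to be on the
 * platter, so the whole range costs a single cache flush. */
static void fsync_range_barrier(int first, int last)
{
	int b;

	for (b = first; b < last; b++)
		submit_block(b, WRITE_PLAIN);
	submit_block(last, WRITE_BARRIER);
}

/* WRITESYNC variant: every block carries its own post-flush, so no
 * ordering is needed, but it pays one flush per block.  That only wins
 * when the fsync covers a single page. */
static void fsync_range_writesync(int first, int last)
{
	int b;

	for (b = first; b <= last; b++)
		submit_block(b, WRITE_SYNC);
}

int main(void)
{
	fsync_range_barrier(0, 3);	/* WRITE WRITE WRITE WRITEBARRIER */
	fsync_range_writesync(0, 0);	/* single page: one WRITESYNC     */
	return 0;
}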

--
Jens Axboe
