Subject: Re: ext3 allocate-with-reservation latencies
From: Stephen Tweedie
Date: 2005-04-12 13:41
Hi,

On Tue, 2005-04-12 at 07:41, Mingming Cao wrote:

> > Note that this may improve average case latencies, but it's not likely
> > to improve worst-case ones. We still need a write lock to install a new
> > window, and that's going to have to wait for us to finish finding a free
> > bit even if that operation starts using a read lock.
> >
> Yes indeed. However, nothing is free and there are always trade-offs. :)
>
> By worst case, do you mean multiple writes trying to allocate blocks
> around the same area?

It doesn't matter where they are; multiple new file opens will all be
looking for a write lock. You only need one long-held read lock and all
the writers still block. The worst-case latencies can't be properly
solved with r/w locks --- those let the readers go more quickly
(assuming they are in the majority), which helps the average case, but
writers still have to wait for exclusive access. We only really help
them by dropping the lock entirely.
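
To make the point concrete, here is a rough sketch of the pattern
(hypothetical code, not the real ext3 locking; rsv_lock, the struct and
the helpers are all made up):

#include <linux/spinlock.h>

struct reserve_window;                  /* stand-in for the real struct */
extern unsigned long find_free_bit_slowly(void);
extern void insert_window(struct reserve_window *rsv);

static DEFINE_RWLOCK(rsv_lock);

/* Any number of tasks can scan under the read lock at once... */
static unsigned long scan_for_free_bit(void)
{
        unsigned long bit;

        read_lock(&rsv_lock);
        bit = find_free_bit_slowly();   /* the long bitmap search */
        read_unlock(&rsv_lock);
        return bit;
}

/* ...but each new file open still needs the write lock to install
 * its window, and a single long-held read lock stalls them all. */
static void install_new_window(struct reserve_window *rsv)
{
        write_lock(&rsv_lock);
        insert_window(rsv);             /* add window to the rsv tree */
        write_unlock(&rsv_lock);
}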

> Even if we take out the whole
> reservation scheme, we could still run into this kind of latency: the
> bitmap on disk and the copy in the journal can be extremely
> inconsistent, so we need to keep searching both until we find a bit
> that is free in both maps.

Quite possibly. But as long as that code is running without locks, those
latencies are much easier to deal with: they won't impact other CPUs,
it's easy to drop in a cond_resched(), and there's even CONFIG_PREEMPT.

> > I'm not really sure what to do about worst-case here. For that, we
> > really do want to drop the lock entirely while we do the bitmap scan.

> Hmm... if we drop the lock entirely while scanning the bitmap (assuming
> you mean drop the read lock), then I am afraid we have to re-check the
> tree (requiring a read or write lock) to see if the new window space is
> still there after the scan succeeds.

Sure. You basically start off with a provisional window, and then if
necessary roll it forward just the same way you roll normal windows
forward when they get to their end. That means you can still drop the
lock while you search for new space. When you get there, reacquire the
lock and check that the intervening space is still available.
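
Roughly the flow I have in mind, as a sketch (every helper name below
is hypothetical):

#include <linux/spinlock.h>

struct reserve_window;                  /* stand-in for the real struct */

extern rwlock_t rsv_lock;
extern void alloc_provisional(struct reserve_window *rsv);
extern unsigned long scan_bitmaps_for(struct reserve_window *rsv);
extern int window_still_free(struct reserve_window *rsv, unsigned long bit);
extern void commit_window(struct reserve_window *rsv, unsigned long bit);
extern void roll_window_forward(struct reserve_window *rsv);

static void allocate_with_provisional_window(struct reserve_window *rsv)
{
        unsigned long bit;

        write_lock(&rsv_lock);
        alloc_provisional(rsv);         /* claim a tentative range */
        write_unlock(&rsv_lock);

        bit = scan_bitmaps_for(rsv);    /* the long search, lock-free */

        write_lock(&rsv_lock);
        if (window_still_free(rsv, bit)) {
                commit_window(rsv, bit);
        } else {
                /* Someone slid a window into the gap meanwhile: roll
                 * ours forward past it, just as an exhausted window
                 * is rolled forward today, and rescan from there. */
                roll_window_forward(rsv);
        }
        write_unlock(&rsv_lock);
}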

That's really cheap for the common case. The difficulty is when you
have many parallel allocations hitting the same block group: they
allocate provisional windows, find the same free area later on in the
group, and then stomp on each other as they try to move their windows
there.

I wonder if there's not a simple solution for this --- mark the window
as "provisional", and if any other task tries to allocate in the space
immediately following such a window, it needs to block until that window
is released.
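
A sketch of that idea (RSV_PROVISIONAL, rsv_wait and the struct layout
are all invented here, and memory-ordering details are glossed over):

#include <linux/wait.h>

struct reserve_window {
        unsigned long rsv_start;
        unsigned long rsv_end;
        unsigned int rsv_flags;
};
#define RSV_PROVISIONAL 0x1

static DECLARE_WAIT_QUEUE_HEAD(rsv_wait);

/* A task that wants to allocate in the space just past a provisional
 * window blocks until the owner has settled that window. */
static void wait_on_provisional(struct reserve_window *prev)
{
        wait_event(rsv_wait, !(prev->rsv_flags & RSV_PROVISIONAL));
}

/* Called by the owner once its bitmap scan has confirmed (or
 * discarded) the window. */
static void settle_window(struct reserve_window *rsv)
{
        rsv->rsv_flags &= ~RSV_PROVISIONAL;
        wake_up_all(&rsv_wait);
}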

--Stephen
