Subject: Re: [ext3][kernels >= 2.6.20.7 at least] KDE going comatose when FS is under heavy write load (massive starvation)
On Thu, 03 May 2007 21:38:10 +0400
Alex Tomas <alex@clusterfs.com> wrote:

> Andrew Morton wrote:
> > We can make great improvements here, and I've (twice) previously described
> > how: hoist the entire ordered-mode data handling out of ext3, and out of
> > the buffer_head layer and move it up into the VFS pagecache layer.
> > Basically, do ordered-data with a commit-time inode walk, calling
> > do_sync_mapping_range().
> >
> > Do it in the VFS. Make reiserfs use it, remove reiserfs ordered-mode too.
> > Make XFS use it, fix the hey-my-files-are-all-full-of-zeroes problem there.
>
> I'm not sure it's that easy.
>
> if we move to pages, then we have to mark pages to be flushed while
> holding the transaction open. Now take delayed allocation into account:
> we need to allocate a number of blocks at once and then mark all the
> pages mapped, again within the context of the same transaction.

Yes, there can be issues with needing to allocate journal space within the
context of a commit. But

a) If the page has newly allocated space on disk then the metadata which
refers to that page is already in the journal: no new journal space
needed.

b) If the page doesn't have space allocated on disk then we don't need
to write it out at ordered-mode commit time, because the post-recovery
filesystem will not have any references to that page.

c) If the page is dirty due to overwrite then no metadata update was required.

IOW, under what circumstances would an ordered-mode commit need to allocate
space for a delayed-allocate page?

However b) might lead to the hey-my-file-is-full-of-zeroes problem.
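
To make (a)-(c) concrete, the commit-time walk could look vaguely like the
sketch below.  ordered_commit_walk(), the per-transaction inode list and
its i_ordered_list linkage are all made-up names for illustration; the only
real interface assumed is do_sync_mapping_range() from fs/sync.c.

#include <linux/fs.h>
#include <linux/list.h>
#include <linux/kernel.h>

/*
 * Sketch only: at commit time, walk the inodes whose data was dirtied
 * under this transaction and push it out through the VFS pagecache
 * layer, instead of tracking ordered-mode buffer_heads inside ext3.
 * i_ordered_list is a hypothetical linkage on a per-transaction list.
 */
static int ordered_commit_walk(struct list_head *commit_inodes)
{
	struct inode *inode;
	int err = 0;

	list_for_each_entry(inode, commit_inodes, i_ordered_list) {
		/*
		 * Cases (a) and (c): the data already has disk space, so
		 * just write it.  Case (b) pages (delayed-allocate, no
		 * space yet) needn't be written at commit time at all.
		 */
		err = do_sync_mapping_range(inode->i_mapping, 0, LLONG_MAX,
					    SYNC_FILE_RANGE_WRITE |
					    SYNC_FILE_RANGE_WAIT_AFTER);
		if (err)
			break;
	}
	return err;
}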

> so, an implementation
> would look like the following?
>
> generic_writepages() {
>         /* collect set of contig. dirty pages */
>         foo_get_blocks() {
>                 foo_journal_start();
>                 foo_new_blocks();
>                 foo_attach_blocks_to_inode();
>                 generic_mark_pages_mapped();
>                 foo_journal_stop();
>         }
> }
>
> another question: will it scale well, given that the number of dirty
> inodes can be much larger than the number of inodes with dirty mapped
> blocks (in the delayed-allocation case, for example)?

Possibly - zillions of dirty-for-atime inodes might get in the way. A
short-term fix would be to create a separate dirty-inode list on the
superblock (ug). A long-term fix is to rip out all the per-superblock
dirty-inode lists and use a radix-tree: not for lookup purposes, but for
the tree's ability to do tagged and restartable searches.
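
For illustration only, a tagged and restartable walk might look something
like this.  The per-superblock inode radix-tree and its dirty tag are
assumptions (no such tree exists today); radix_tree_gang_lookup_tag() is
the existing lib/radix-tree.c primitive.

#include <linux/radix-tree.h>
#include <linux/fs.h>
#include <linux/kernel.h>

#define SB_INODE_TAG_DIRTY	0	/* hypothetical "inode has dirty data" tag */

/*
 * Sketch: the tree is indexed by inode number and only inodes tagged
 * dirty are visited.  Because each lookup takes a start index, the walk
 * can drop locks, sleep, and restart where it left off.
 */
static void walk_dirty_inodes(struct radix_tree_root *sb_inode_tree)
{
	struct inode *batch[16];
	unsigned long next = 0;
	unsigned int nr, i;

	do {
		nr = radix_tree_gang_lookup_tag(sb_inode_tree, (void **)batch,
						next, ARRAY_SIZE(batch),
						SB_INODE_TAG_DIRTY);
		for (i = 0; i < nr; i++) {
			/* ...write batch[i] back, possibly sleeping... */
			next = batch[i]->i_ino + 1;	/* restart point */
		}
	} while (nr);
}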