Subject: [patch] pagecache-2.3.9-H3, bmap & ext2fs cleanup patch

This is the second try at the bmap() cleanup; the attached patch is
against 2.3.9-pre4. The bmap() interface is now, i hope, properly cleaned
up: bmap() is in fact gone and has been replaced by a more generic
interface to the lowlevel filesystem:

/*
* Generic block allocator exported by the lowlevel fs. All
* metadata details are handled by the lowlevel fs, all 'logical
* data content' details are handled by the highlevel block layer.
*/
int (*get_block) (struct inode *, long, struct buffer_head *, int);

this fits rather nicely into pre4. This cleanup got rid of a fair amount
of unnecessary argument passing between ext2fs and the page cache layer.
The change breaks block filesystems; only ext2fs, nfs, procfs and swapping
are tested - the rest should fail safely at compilation time. bmap()-less
filesystems are not affected.
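
as an illustration of the calling convention, here is roughly how a
caller maps a logical block through the new interface without allocating
anything. This is only a minimal sketch mirroring the new bmap() wrapper
in fs/inode.c from the patch below (map_logical_block is just an
illustrative name, the on-stack buffer_head is purely a result container,
and the buffer_hole() check is an extra nicety so holes map to 0):

/*
 * sketch: map logical block 'block' of 'inode' to a physical block
 * number, without allocating anything.
 */
static long map_logical_block(struct inode *inode, long block)
{
        struct buffer_head tmp;

        if (!inode->i_op || !inode->i_op->get_block)
                return 0;

        tmp.b_state = 0;
        tmp.b_blocknr = 0;
        /* FS_GETBLOCK_MAP: lookup only, never allocate new blocks */
        inode->i_op->get_block(inode, block, &tmp, FS_GETBLOCK_MAP);

        /* a hole has no physical block as far as bmap() callers go */
        if (buffer_hole(&tmp))
                return 0;
        return tmp.b_blocknr;
}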

i did some lowlevel ext2fs cleanups as well - more to come, but maybe
people want to comment on the direction i took.

about the get_block() change: i've changed/extended the fs <-> pagecache
interface relative to pre4, to do the following [more or less additional]
things too:

#define BH_Mapped 4 /* b_blocknr is a cached block mapping value */
#define BH_New 5 /* buffer got freshly allocated by the fs */
#define BH_Hole 6 /* buffer is an fs hole */

i think each of these 3 new flags represents a distinct property, and thus
we should not try too hard to merge them.

BH_Mapped is what used to be BH_Allocated (BH_Allocated is renamed to
BH_Mapped), but the 'BH_Allocated' name i think is confusing now, in the
case of filesystem holes. [just to clarify: it was not confusing with
pre4's mechanism, but it became confusing in this patch] A BH_Mapped
buffer now purely means that a mapping has been established between the
pagecache and the lowlevel fs - it does not mean the block is allocated
and physically present. A hole is 'mapped', but not 'allocated'.

BH_New is straightforward: it shows whether a logical inode block was
freshly allocated by the fs. This can be and is used by the higher block
level to optimize security-clearing (or non-clearing) of blocks. [i'll
send patches for more optimizations later]

BH_Hole is straightforward as well. A filesystem does not have to support
holes.
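
to illustrate how the higher block layer is expected to interpret these
flags after a get_block() call, here is a sketch of the intent, modeled
on block_read_full_page() and block_write_partial_page() from the patch
below (arr, nr, blocksize and iblock are the usual readpage-local
variables):

/* read side: decide what to do with a buffer after mapping it */
inode->i_op->get_block(inode, iblock, bh, FS_GETBLOCK_MAP);

if (buffer_hole(bh)) {
        /* mapped, but no physical block: zero contents by definition */
        memset(bh->b_data, 0, blocksize);
        bh->b_state |= (1UL << BH_Uptodate);
} else {
        /* b_blocknr is a valid physical block - read it from disk */
        init_buffer(bh, end_buffer_io_async, NULL);
        bh->b_dev = inode->i_dev;
        arr[nr++] = bh;         /* queued for ll_rw_block(READ, ...) */
}

/*
 * on the write side, FS_GETBLOCK_NEW plus buffer_new(bh) tells the
 * caller that the block was freshly allocated, so its previous on-disk
 * contents need not be read back before a partial write - the buffer
 * can simply be cleared or overwritten.
 */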


<FUTURE STUFF>

I think fs-holes are a handy concept, and they could be pushed further
into the VM layer (by introducing PG_zero or PG_hole) so that mmap()-ed,
existing holes are mapped to the zero-page - am i reading the VM code
correctly that this does not happen currently? When a hole is accessed,
filemap_nopage() currently does an unconditional readpage() (if the page
is not present in the pagecache). Holes could be cleared in a delayed way, via
the following semantics:

- if user-space or the VM accesses it then do clear_user() or map-zero.
- if the hole is written then just allocate the new block - which will be
cleared anyway.

this brings 3 distinct optimizations: 1) we can zero-map mmap()-ed holes.
2) we can clear_user() when a hole is read(). 3) we can save clearing the
page altogether if it's overwritten later on, or we can merge it with the
write if a partial write happens. [this latter optimization is 'in my
queue']
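
purely as a hypothetical sketch of what optimization 1) could look like -
PG_hole / PageHole() do not exist in this patch and the filemap_nopage()
change is only proposed, so the fragment below (including the
write_access / ZERO_PAGE usage) just spells out the intended semantics:

/* hypothetical: inside filemap_nopage(), page found in the pagecache */
if (PageHole(page)) {                   /* hypothetical PG_hole test */
        if (!write_access)
                return ZERO_PAGE;       /* 1) map the shared zero page */
        /*
         * on write access fall through to the fs: get_block() with
         * FS_GETBLOCK_NEW allocates a real block, which gets cleared
         * or fully overwritten anyway, so clearing the old hole page
         * can be skipped entirely - optimization 3).
         */
}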

we can even do a 4th optimization: save memory by not allocating a full
page for a 'virtual hole': only a small, dynamic 'struct page' placeholder
is necessary so it can be found in the pagecache.

am i overlooking some conceptual difficulty, are such 'virtual holes'
doable? [i'll try to come up with some test-patch]

[two of those hole-related optimizations are in the domain of the 'block
layer', and thus they are possible today. The other optimizations are in
the domain of the pagecache and need a slight redesign.]

</FUTURE STUFF>

the patch also includes some minor buffer.c stuff - BH_Protected is
removed [will send a patch for rd.c]. I also included the
mark_buffer_uptodate optimization.

the only places that rely on the old fs-hole semantics and have ugly
block-IO code are swapping and brw_page(); my plan is to extend
swapper_inode_ops with its own get_block() implementation - is this what
we want? With those planned changes, swapping will be more and more like a
special one-inode block-filesystem, used internally by the VM.

anyway, i've tested the patch carefully: it handles big files,
out-of-space and out-of-memory situations properly and does not seem to
corrupt files or metadata in stress-tests.

-- mingo
--- linux/fs/proc/array.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/proc/array.c Sat Jun 26 15:29:37 1999
@@ -1519,7 +1519,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
@@ -1570,7 +1570,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/proc/root.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/proc/root.c Sat Jun 26 15:29:37 1999
@@ -71,7 +71,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
@@ -97,7 +97,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
@@ -142,7 +142,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
@@ -302,7 +302,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
@@ -490,7 +490,7 @@
NULL, /* rename */
proc_self_readlink, /* readlink */
proc_self_follow_link, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
@@ -513,7 +513,7 @@
NULL, /* rename */
proc_readlink, /* readlink */
proc_follow_link, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/proc/mem.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/proc/mem.c Sat Jun 26 15:29:37 1999
@@ -336,7 +336,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/proc/base.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/proc/base.c Sat Jun 26 15:29:37 1999
@@ -45,7 +45,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/proc/fd.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/proc/fd.c Sat Jun 26 15:29:37 1999
@@ -51,7 +51,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/proc/generic.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/proc/generic.c Sat Jun 26 15:29:37 1999
@@ -60,7 +60,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
@@ -86,7 +86,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/proc/kmsg.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/proc/kmsg.c Sat Jun 26 15:29:37 1999
@@ -72,7 +72,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/proc/link.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/proc/link.c Sat Jun 26 15:29:37 1999
@@ -49,7 +49,7 @@
NULL, /* rename */
proc_readlink, /* readlink */
proc_follow_link, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/proc/net.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/proc/net.c Sat Jun 26 15:29:38 1999
@@ -113,7 +113,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/proc/omirr.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/proc/omirr.c Sat Jun 26 15:29:38 1999
@@ -289,7 +289,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/proc/openpromfs.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/proc/openpromfs.c Sat Jun 26 15:29:38 1999
@@ -577,7 +577,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
@@ -614,7 +614,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
@@ -651,7 +651,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/proc/proc_devtree.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/proc/proc_devtree.c Sat Jun 26 15:29:38 1999
@@ -57,7 +57,7 @@
NULL, /* rename */
devtree_readlink, /* readlink */
devtree_follow_link, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/proc/scsi.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/proc/scsi.c Sat Jun 26 15:29:38 1999
@@ -71,7 +71,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/proc/sysvipc.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/proc/sysvipc.c Sat Jun 26 15:29:38 1999
@@ -130,7 +130,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/nfs/dir.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/nfs/dir.c Sat Jun 26 15:29:38 1999
@@ -78,7 +78,7 @@
nfs_rename, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/nfs/symlink.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/nfs/symlink.c Sat Jun 26 15:29:38 1999
@@ -43,7 +43,7 @@
NULL, /* rename */
nfs_readlink, /* readlink */
nfs_follow_link, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/nfs/file.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/nfs/file.c Sat Jun 26 15:29:38 1999
@@ -71,7 +71,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
nfs_readpage, /* readpage */
nfs_writepage, /* writepage */
NULL, /* flushpage */
@@ -167,7 +167,7 @@
* If the writer ends up delaying the write, the writer needs to
* increment the page use counts until he is done with the page.
*/
-static long nfs_write_one_page(struct file *file, struct page *page, unsigned long offset, unsigned long bytes, const char * buf)
+static int nfs_write_one_page(struct file *file, struct page *page, unsigned long offset, unsigned long bytes, const char * buf)
{
long status;

--- linux/fs/ext2/inode.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/ext2/inode.c Sat Jun 26 15:29:38 1999
@@ -129,23 +129,22 @@
return result;
}

-
-int ext2_bmap (struct inode * inode, int block)
+static inline long ext2_block_map (struct inode * inode, long block)
{
int i, ret;
- int addr_per_block = EXT2_ADDR_PER_BLOCK(inode->i_sb);
- int addr_per_block_bits = EXT2_ADDR_PER_BLOCK_BITS(inode->i_sb);
+ int ptrs = EXT2_ADDR_PER_BLOCK(inode->i_sb);
+ int ptrs_bits = EXT2_ADDR_PER_BLOCK_BITS(inode->i_sb);

ret = 0;
lock_kernel();
if (block < 0) {
- ext2_warning (inode->i_sb, "ext2_bmap", "block < 0");
+ ext2_warning (inode->i_sb, "ext2_block_map", "block < 0");
goto out;
}
- if (block >= EXT2_NDIR_BLOCKS + addr_per_block +
- (1 << (addr_per_block_bits * 2)) +
- ((1 << (addr_per_block_bits * 2)) << addr_per_block_bits)) {
- ext2_warning (inode->i_sb, "ext2_bmap", "block > big");
+ if (block >= EXT2_NDIR_BLOCKS + ptrs +
+ (1 << (ptrs_bits * 2)) +
+ ((1 << (ptrs_bits * 2)) << ptrs_bits)) {
+ ext2_warning (inode->i_sb, "ext2_block_map", "block > big");
goto out;
}
if (block < EXT2_NDIR_BLOCKS) {
@@ -153,7 +152,7 @@
goto out;
}
block -= EXT2_NDIR_BLOCKS;
- if (block < addr_per_block) {
+ if (block < ptrs) {
i = inode_bmap (inode, EXT2_IND_BLOCK);
if (!i)
goto out;
@@ -161,123 +160,64 @@
inode->i_sb->s_blocksize), block);
goto out;
}
- block -= addr_per_block;
- if (block < (1 << (addr_per_block_bits * 2))) {
+ block -= ptrs;
+ if (block < (1 << (ptrs_bits * 2))) {
i = inode_bmap (inode, EXT2_DIND_BLOCK);
if (!i)
goto out;
i = block_bmap (bread (inode->i_dev, i,
inode->i_sb->s_blocksize),
- block >> addr_per_block_bits);
+ block >> ptrs_bits);
if (!i)
goto out;
ret = block_bmap (bread (inode->i_dev, i,
inode->i_sb->s_blocksize),
- block & (addr_per_block - 1));
+ block & (ptrs - 1));
goto out;
}
- block -= (1 << (addr_per_block_bits * 2));
+ block -= (1 << (ptrs_bits * 2));
i = inode_bmap (inode, EXT2_TIND_BLOCK);
if (!i)
goto out;
i = block_bmap (bread (inode->i_dev, i, inode->i_sb->s_blocksize),
- block >> (addr_per_block_bits * 2));
+ block >> (ptrs_bits * 2));
if (!i)
goto out;
i = block_bmap (bread (inode->i_dev, i, inode->i_sb->s_blocksize),
- (block >> addr_per_block_bits) & (addr_per_block - 1));
+ (block >> ptrs_bits) & (ptrs - 1));
if (!i)
goto out;
ret = block_bmap (bread (inode->i_dev, i, inode->i_sb->s_blocksize),
- block & (addr_per_block - 1));
+ block & (ptrs - 1));
out:
unlock_kernel();
return ret;
}

-int ext2_bmap_create (struct inode * inode, int block)
-{
- int i;
- int addr_per_block = EXT2_ADDR_PER_BLOCK(inode->i_sb);
- int addr_per_block_bits = EXT2_ADDR_PER_BLOCK_BITS(inode->i_sb);
-
- if (block < 0) {
- ext2_warning (inode->i_sb, "ext2_bmap", "block < 0");
- return 0;
- }
- if (block >= EXT2_NDIR_BLOCKS + addr_per_block +
- (1 << (addr_per_block_bits * 2)) +
- ((1 << (addr_per_block_bits * 2)) << addr_per_block_bits)) {
- ext2_warning (inode->i_sb, "ext2_bmap", "block > big");
- return 0;
- }
- if (block < EXT2_NDIR_BLOCKS)
- return inode_bmap (inode, block);
- block -= EXT2_NDIR_BLOCKS;
- if (block < addr_per_block) {
- i = inode_bmap (inode, EXT2_IND_BLOCK);
- if (!i)
- return 0;
- return block_bmap (bread (inode->i_dev, i,
- inode->i_sb->s_blocksize), block);
- }
- block -= addr_per_block;
- if (block < (1 << (addr_per_block_bits * 2))) {
- i = inode_bmap (inode, EXT2_DIND_BLOCK);
- if (!i)
- return 0;
- i = block_bmap (bread (inode->i_dev, i,
- inode->i_sb->s_blocksize),
- block >> addr_per_block_bits);
- if (!i)
- return 0;
- return block_bmap (bread (inode->i_dev, i,
- inode->i_sb->s_blocksize),
- block & (addr_per_block - 1));
- }
- block -= (1 << (addr_per_block_bits * 2));
- i = inode_bmap (inode, EXT2_TIND_BLOCK);
- if (!i)
- return 0;
- i = block_bmap (bread (inode->i_dev, i, inode->i_sb->s_blocksize),
- block >> (addr_per_block_bits * 2));
- if (!i)
- return 0;
- i = block_bmap (bread (inode->i_dev, i, inode->i_sb->s_blocksize),
- (block >> addr_per_block_bits) & (addr_per_block - 1));
- if (!i)
- return 0;
- return block_bmap (bread (inode->i_dev, i, inode->i_sb->s_blocksize),
- block & (addr_per_block - 1));
-}
-
static struct buffer_head * inode_getblk (struct inode * inode, int nr,
- int create, int new_block, int * err, int metadata,
- int *phys_block, int *created)
+ int new_block, int * err, int metadata, long *phys, int *new)
{
u32 * p;
int tmp, goal = 0;
struct buffer_head * result;
- int blocks = inode->i_sb->s_blocksize / 512;
+ int blocksize = inode->i_sb->s_blocksize;

p = inode->u.ext2_i.i_data + nr;
repeat:
tmp = le32_to_cpu(*p);
if (tmp) {
if (metadata) {
- struct buffer_head * result = getblk (inode->i_dev, tmp, inode->i_sb->s_blocksize);
+ result = getblk (inode->i_dev, tmp, blocksize);
if (tmp == le32_to_cpu(*p))
return result;
brelse (result);
goto repeat;
} else {
- *phys_block = tmp;
+ *phys = tmp;
return NULL;
}
}
*err = -EFBIG;
- if (!create)
- goto dont_create;

/* Check file limits.. */
{
@@ -286,7 +226,6 @@
limit >>= EXT2_BLOCK_SIZE_BITS(inode->i_sb);
if (new_block >= limit) {
send_sig(SIGXFSZ, current, 0);
-dont_create:
*err = -EFBIG;
return NULL;
}
@@ -314,34 +253,41 @@
ext2_debug ("goal = %d.\n", goal);

tmp = ext2_alloc_block (inode, goal, err);
- if (!tmp)
+ if (!tmp) {
+ *err = -ENOSPC;
return NULL;
+ }
if (metadata) {
- result = getblk (inode->i_dev, tmp, inode->i_sb->s_blocksize);
+ result = getblk (inode->i_dev, tmp, blocksize);
if (*p) {
ext2_free_blocks (inode, tmp, 1);
brelse (result);
goto repeat;
}
- memset(result->b_data, 0, inode->i_sb->s_blocksize);
+ memset(result->b_data, 0, blocksize);
mark_buffer_uptodate(result, 1);
mark_buffer_dirty(result, 1);
} else {
if (*p) {
+ /*
+ * Nobody is allowed to change block allocation
+ * state from under us:
+ */
+ BUG();
ext2_free_blocks (inode, tmp, 1);
goto repeat;
}
- *phys_block = tmp;
+ *phys = tmp;
result = NULL;
*err = 0;
- *created = 1;
+ *new = 1;
}
*p = cpu_to_le32(tmp);

inode->u.ext2_i.i_next_alloc_block = new_block;
inode->u.ext2_i.i_next_alloc_goal = tmp;
inode->i_ctime = CURRENT_TIME;
- inode->i_blocks += blocks;
+ inode->i_blocks += blocksize/512;
if (IS_SYNC(inode) || inode->u.ext2_i.i_osync)
ext2_sync_inode (inode);
else
@@ -358,24 +304,23 @@
* NULL return in the data case is mandatory.
*/
static struct buffer_head * block_getblk (struct inode * inode,
- struct buffer_head * bh, int nr, int create, int blocksize,
- int new_block, int * err, int metadata, int *phys_block, int *created)
+ struct buffer_head * bh, int nr,
+ int new_block, int * err, int metadata, long *phys, int *new)
{
int tmp, goal = 0;
u32 * p;
struct buffer_head * result;
- int blocks = inode->i_sb->s_blocksize / 512;
+ int blocksize = inode->i_sb->s_blocksize;
unsigned long limit;
-
+
+ result = NULL;
if (!bh)
- return NULL;
+ goto out;
if (!buffer_uptodate(bh)) {
ll_rw_block (READ, 1, &bh);
wait_on_buffer (bh);
- if (!buffer_uptodate(bh)) {
- brelse (bh);
- return NULL;
- }
+ if (!buffer_uptodate(bh))
+ goto out;
}
p = (u32 *) bh->b_data + nr;
repeat:
@@ -383,31 +328,24 @@
if (tmp) {
if (metadata) {
result = getblk (bh->b_dev, tmp, blocksize);
- if (tmp == le32_to_cpu(*p)) {
- brelse (bh);
- return result;
- }
+ if (tmp == le32_to_cpu(*p))
+ goto out;
brelse (result);
goto repeat;
} else {
- *phys_block = tmp;
- brelse (bh);
- return NULL;
+ *phys = tmp;
+ /* result == NULL */
+ goto out;
}
}
*err = -EFBIG;
- if (!create) {
- brelse (bh);
- return NULL;
- }

limit = current->rlim[RLIMIT_FSIZE].rlim_cur;
if (limit < RLIM_INFINITY) {
limit >>= EXT2_BLOCK_SIZE_BITS(inode->i_sb);
if (new_block >= limit) {
- brelse (bh);
send_sig(SIGXFSZ, current, 0);
- return NULL;
+ goto out;
}
}

@@ -424,10 +362,8 @@
goal = bh->b_blocknr;
}
tmp = ext2_alloc_block (inode, goal, err);
- if (!tmp) {
- brelse (bh);
- return NULL;
- }
+ if (!tmp)
+ goto out;
if (metadata) {
result = getblk (bh->b_dev, tmp, blocksize);
if (*p) {
@@ -439,10 +375,8 @@
mark_buffer_uptodate(result, 1);
mark_buffer_dirty(result, 1);
} else {
- *phys_block = tmp;
- result = NULL;
- *err = 0;
- *created = 1;
+ *phys = tmp;
+ *new = 1;
}
if (*p) {
ext2_free_blocks (inode, tmp, 1);
@@ -456,116 +390,168 @@
wait_on_buffer (bh);
}
inode->i_ctime = CURRENT_TIME;
- inode->i_blocks += blocks;
+ inode->i_blocks += blocksize/512;
mark_inode_dirty(inode);
inode->u.ext2_i.i_next_alloc_block = new_block;
inode->u.ext2_i.i_next_alloc_goal = tmp;
+ *err = 0;
+out:
brelse (bh);
return result;
}

-int ext2_getblk_block (struct inode * inode, long block,
- int create, int * err, int * created)
+int ext2_get_block (struct inode *inode, long iblock,
+ struct buffer_head *bh_result, int flag)
{
- struct buffer_head * bh, *tmp;
- unsigned long b;
- unsigned long addr_per_block = EXT2_ADDR_PER_BLOCK(inode->i_sb);
- int addr_per_block_bits = EXT2_ADDR_PER_BLOCK_BITS(inode->i_sb);
- int phys_block, ret;
+ int ret, err, new;
+ struct buffer_head *bh;
+ unsigned long ptr, phys;
+ /*
+ * block pointers per block
+ */
+ unsigned long ptrs = EXT2_ADDR_PER_BLOCK(inode->i_sb);
+ int ptrs_bits = EXT2_ADDR_PER_BLOCK_BITS(inode->i_sb);
+ const int direct_blocks = EXT2_NDIR_BLOCKS,
+ indirect_blocks = ptrs,
+ double_blocks = (1 << (ptrs_bits * 2)),
+ triple_blocks = (1 << (ptrs_bits * 3));
+ if (flag == FS_GETBLOCK_MAP) {
+ /*
+ * Will clean this up further, ext2_block_map() should use the
+ * bh instead of an integer block-number interface.
+ */
+ phys = ext2_block_map(inode, iblock);
+ bh_result->b_state |= (1UL << BH_Mapped);
+ if (phys) {
+ bh_result->b_blocknr = phys;
+ } else {
+ bh_result->b_blocknr = -1000; // B safe
+ bh_result->b_state |= (1UL << BH_Hole);
+ }
+ return 0;
+ }

- lock_kernel();
+ err = -EIO;
+ new = 0;
ret = 0;
- *err = -EIO;
- if (block < 0) {
- ext2_warning (inode->i_sb, "ext2_getblk", "block < 0");
- goto abort;
- }
- if (block > EXT2_NDIR_BLOCKS + addr_per_block +
- (1 << (addr_per_block_bits * 2)) +
- ((1 << (addr_per_block_bits * 2)) << addr_per_block_bits)) {
- ext2_warning (inode->i_sb, "ext2_getblk", "block > big");
- goto abort;
- }
+ bh = NULL;
+
+ lock_kernel();
+
+ if (iblock < 0)
+ goto abort_negative;
+ if (iblock > direct_blocks + indirect_blocks +
+ double_blocks + triple_blocks)
+ goto abort_too_big;
+
/*
* If this is a sequential block allocation, set the next_alloc_block
* to this block now so that all the indblock and data block
* allocations use the same goal zone
*/

- ext2_debug ("block %lu, next %lu, goal %lu.\n", block,
+ ext2_debug ("block %lu, next %lu, goal %lu.\n", iblock,
inode->u.ext2_i.i_next_alloc_block,
inode->u.ext2_i.i_next_alloc_goal);

- if (block == inode->u.ext2_i.i_next_alloc_block + 1) {
+ if (iblock == inode->u.ext2_i.i_next_alloc_block + 1) {
inode->u.ext2_i.i_next_alloc_block++;
inode->u.ext2_i.i_next_alloc_goal++;
}

- *err = 0;
- b = block;
- *created = 0;
- if (block < EXT2_NDIR_BLOCKS) {
- /*
- * data page.
- */
- tmp = inode_getblk (inode, block, create, b,
- err, 0, &phys_block, created);
- goto out;
- }
- block -= EXT2_NDIR_BLOCKS;
- if (block < addr_per_block) {
- bh = inode_getblk (inode, EXT2_IND_BLOCK, create, b, err, 1, NULL, NULL);
- tmp = block_getblk (inode, bh, block, create,
- inode->i_sb->s_blocksize, b, err, 0, &phys_block, created);
- goto out;
- }
- block -= addr_per_block;
- if (block < (1 << (addr_per_block_bits * 2))) {
- bh = inode_getblk (inode, EXT2_DIND_BLOCK, create, b, err, 1, NULL, NULL);
- bh = block_getblk (inode, bh, block >> addr_per_block_bits,
- create, inode->i_sb->s_blocksize, b, err, 1, NULL, NULL);
- tmp = block_getblk (inode, bh, block & (addr_per_block - 1),
- create, inode->i_sb->s_blocksize, b, err, 0, &phys_block, created);
- goto out;
- }
- block -= (1 << (addr_per_block_bits * 2));
- bh = inode_getblk (inode, EXT2_TIND_BLOCK, create, b, err, 1, NULL,NULL);
- bh = block_getblk (inode, bh, block >> (addr_per_block_bits * 2),
- create, inode->i_sb->s_blocksize, b, err, 1, NULL,NULL);
- bh = block_getblk (inode, bh, (block >> addr_per_block_bits) &
- (addr_per_block - 1), create, inode->i_sb->s_blocksize,
- b, err, 1, NULL,NULL);
- tmp = block_getblk (inode, bh, block & (addr_per_block - 1), create,
- inode->i_sb->s_blocksize, b, err, 0, &phys_block, created);
+ err = 0;
+ ptr = iblock;
+
+ /*
+ * ok, these macros clean the logic up a bit and make
+ * it much more readable:
+ */
+#define GET_INODE_DATABLOCK(x) \
+ inode_getblk(inode, x, iblock, &err, 0, &phys, &new)
+#define GET_INODE_PTR(x) \
+ inode_getblk(inode, x, iblock, &err, 1, NULL, NULL)
+#define GET_INDIRECT_DATABLOCK(x) \
+ block_getblk (inode, bh, x, iblock, &err, 0, &phys, &new);
+#define GET_INDIRECT_PTR(x) \
+ block_getblk (inode, bh, x, iblock, &err, 1, NULL, NULL);
+
+ if (ptr < direct_blocks) {
+ bh = GET_INODE_DATABLOCK(ptr);
+ goto out;
+ }
+ ptr -= direct_blocks;
+ if (ptr < indirect_blocks) {
+ bh = GET_INODE_PTR(EXT2_IND_BLOCK);
+ goto get_indirect;
+ }
+ ptr -= indirect_blocks;
+ if (ptr < double_blocks) {
+ bh = GET_INODE_PTR(EXT2_DIND_BLOCK);
+ goto get_double;
+ }
+ ptr -= double_blocks;
+ bh = GET_INODE_PTR(EXT2_TIND_BLOCK);
+ bh = GET_INDIRECT_PTR(ptr >> (ptrs_bits * 2));
+get_double:
+ bh = GET_INDIRECT_PTR((ptr >> ptrs_bits) & (ptrs - 1));
+get_indirect:
+ bh = GET_INDIRECT_DATABLOCK(ptr & (ptrs - 1));
+
+#undef GET_INODE_DATABLOCK
+#undef GET_INODE_PTR
+#undef GET_INDIRECT_DATABLOCK
+#undef GET_INDIRECT_PTR

out:
- if (!phys_block)
- goto abort;
- if (*err)
+ if (bh)
+ BUG(); // temporary debugging check
+ if (err)
goto abort;
- ret = phys_block;
+ if (!phys)
+ BUG(); // must not happen either
+
+ bh_result->b_blocknr = phys;
+ bh_result->b_state |= (1UL << BH_Mapped); /* safe */
+ if (new)
+ bh_result->b_state |= (1UL << BH_New);
abort:
unlock_kernel();
- return ret;
+ return err;
+
+abort_negative:
+ ext2_warning (inode->i_sb, "ext2_get_block", "block < 0");
+ goto abort;
+
+abort_too_big:
+ ext2_warning (inode->i_sb, "ext2_get_block", "block > big");
+ goto abort;
}

struct buffer_head * ext2_getblk (struct inode * inode, long block,
int create, int * err)
{
- struct buffer_head *tmp = NULL;
- int phys_block;
- int created;
-
- phys_block = ext2_getblk_block (inode, block, create, err, &created);
-
- if (phys_block) {
- tmp = getblk (inode->i_dev, phys_block, inode->i_sb->s_blocksize);
- if (created) {
+ struct buffer_head *tmp = NULL, dummy;
+
+ dummy.b_state = 0;
+ dummy.b_blocknr = -1000; // catches bugs
+ if (create)
+ *err = ext2_get_block (inode, block, &dummy, FS_GETBLOCK_NEW);
+ else
+ *err = ext2_get_block (inode, block, &dummy, FS_GETBLOCK_MAP);
+
+ if (*err)
+ goto out;
+
+ if (buffer_mapped(&dummy) && !buffer_hole(&dummy)) {
+ tmp = getblk (inode->i_dev, dummy.b_blocknr,
+ inode->i_sb->s_blocksize);
+ if (buffer_new(&dummy)) {
memset(tmp->b_data, 0, inode->i_sb->s_blocksize);
mark_buffer_uptodate(tmp, 1);
mark_buffer_dirty(tmp, 1);
}
}
+out:
return tmp;
}

--- linux/fs/ext2/file.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/ext2/file.c Sat Jun 26 15:29:38 1999
@@ -37,7 +37,6 @@
#define MIN(a,b) (((a)<(b))?(a):(b))
#define MAX(a,b) (((a)>(b))?(a):(b))

-static int ext2_writepage (struct file * file, struct page * page);
static long long ext2_file_lseek(struct file *, long long, int);
#if BITS_PER_LONG < 64
static int ext2_open_file (struct inode *, struct file *);
@@ -106,51 +105,16 @@
}
}

-static int ext2_get_block(struct inode *inode, unsigned long block, struct buffer_head *bh, unsigned int flags)
-{
- int error, created;
- unsigned long blocknr;
-
- blocknr = ext2_getblk_block(inode, block, flags & FS_GETBLK_ALLOCATE, &error, &created);
- if (!blocknr) {
- if (error)
- return error;
- if (!(flags & FS_GETBLK_ALLOCATE))
- goto clear_and_uptodate;
- return -ENOSPC;
- }
-
- bh->b_dev = inode->i_dev;
- bh->b_blocknr = blocknr;
- set_bit(BH_Allocated, &bh->b_state);
-
- if (created) {
-clear_and_uptodate:
- if (flags & FS_GETBLK_UPDATE) {
- memset(bh->b_data, 0, bh->b_size);
- set_bit(BH_Uptodate, &bh->b_state);
- }
- }
- return 0;
-}
-
-static int ext2_writepage (struct file * file, struct page * page)
-{
- return block_write_full_page(file, page, ext2_get_block);
-}
-
-static long ext2_write_one_page (struct file *file, struct page *page, unsigned long offset, unsigned long bytes, const char * buf)
-{
- return block_write_partial_page(file, page, offset, bytes, buf, ext2_get_block);
-}
-
/*
* Write to a file (through the page cache).
*/
static ssize_t
ext2_file_write(struct file *file, const char *buf, size_t count, loff_t *ppos)
{
- ssize_t retval = generic_file_write(file, buf, count, ppos, ext2_write_one_page);
+ ssize_t retval;
+
+ retval = generic_file_write(file, buf, count,
+ ppos, block_write_partial_page);
if (retval > 0) {
struct inode *inode = file->f_dentry->d_inode;
remove_suid(inode);
@@ -223,9 +187,9 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- ext2_bmap, /* bmap */
+ ext2_get_block, /* get_block */
block_read_full_page, /* readpage */
- ext2_writepage, /* writepage */
+ block_write_full_page, /* writepage */
block_flushpage, /* flushpage */
ext2_truncate, /* truncate */
ext2_permission, /* permission */
--- linux/fs/ext2/dir.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/ext2/dir.c Sat Jun 26 15:29:38 1999
@@ -67,7 +67,7 @@
ext2_rename, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/ext2/symlink.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/ext2/symlink.c Sat Jun 26 15:29:38 1999
@@ -43,7 +43,7 @@
NULL, /* rename */
ext2_readlink, /* readlink */
ext2_follow_link, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/exec.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/exec.c Sat Jun 26 15:29:38 1999
@@ -320,7 +320,7 @@
/*
* Read in the complete executable. This is used for "-N" files
* that aren't on a block boundary, and for files on filesystems
- * without bmap support.
+ * without get_block support.
*/
int read_exec(struct dentry *dentry, unsigned long offset,
char * addr, unsigned long count, int to_kmem)
--- linux/fs/buffer.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/buffer.c Sat Jun 26 15:29:38 1999
@@ -410,7 +410,6 @@
if (bh->b_count)
continue;
bh->b_flushtime = 0;
- clear_bit(BH_Protected, &bh->b_state);
clear_bit(BH_Uptodate, &bh->b_state);
clear_bit(BH_Dirty, &bh->b_state);
clear_bit(BH_Req, &bh->b_state);
@@ -676,6 +675,7 @@
bh->b_flushtime = 0;
bh->b_end_io = handler;
bh->b_dev_id = dev_id;
+ bh->b_count = 1;
}

static void end_buffer_io_sync(struct buffer_head *bh, int uptodate)
@@ -699,32 +699,28 @@
struct page *page;
int free;

- mark_buffer_uptodate(bh, uptodate);
-
- /* This is a temporary buffer used for page I/O. */
page = mem_map + MAP_NR(bh->b_data);
-
+ mark_buffer_uptodate(bh, uptodate);
if (!uptodate)
SetPageError(page);

+ /* This is a temporary buffer used for page I/O. */
+
/*
* Be _very_ careful from here on. Bad things can happen if
* two buffer heads end IO at almost the same time and both
* decide that the page is now completely done.
*
- * Async buffer_heads are here only as labels for IO, and get
- * thrown away once the IO for this page is complete. IO is
- * deemed complete once all buffers have been visited
- * (b_count==0) and are now unlocked. We must make sure that
- * only the _last_ buffer that decrements its count is the one
- * that free's the page..
+ * IO here is deemed complete once all asynchron buffers have
+ * gone unused. (note that an asynchron buffer can go synchron
+ * by getting dirtied)
*/
spin_lock_irqsave(&page_uptodate_lock, flags);
unlock_buffer(bh);
bh->b_count--;
tmp = bh->b_this_page;
while (tmp != bh) {
- if (tmp->b_count && (tmp->b_end_io == end_buffer_io_async))
+ if ((tmp->b_end_io == end_buffer_io_async) && tmp->b_count)
goto still_busy;
tmp = tmp->b_this_page;
}
@@ -806,8 +802,7 @@
init_buffer(bh, end_buffer_io_sync, NULL);
bh->b_dev = dev;
bh->b_blocknr = block;
- bh->b_count = 1;
- bh->b_state = 1 << BH_Allocated;
+ bh->b_state = 1 << BH_Mapped;

/* Insert the buffer into the regular lists */
insert_into_lru_list(bh);
@@ -917,6 +912,7 @@
return;
}
printk("VFS: brelse: Trying to free free buffer\n");
+ BUG();
}

/*
@@ -1221,9 +1217,8 @@
if (bmap && !block) {
memset(bh->b_data, 0, size);
set_bit(BH_Uptodate, &bh->b_state);
- continue;
}
- set_bit(BH_Allocated, &bh->b_state);
+ set_bit(BH_Mapped, &bh->b_state);
}
tail->b_this_page = head;
get_page(page);
@@ -1259,14 +1254,17 @@
* is this block fully flushed?
*/
if (offset <= curr_off) {
- if (buffer_allocated(bh)) {
+ /*
+ * does it have any association with the block device?
+ */
+ if (buffer_mapped(bh)) {
bh->b_count++;
wait_on_buffer(bh);
if (bh->b_dev == B_FREE)
BUG();
mark_buffer_clean(bh);
clear_bit(BH_Uptodate, &bh->b_state);
- clear_bit(BH_Allocated, &bh->b_state);
+ clear_bit(BH_Mapped, &bh->b_state);
bh->b_blocknr = 0;
bh->b_count--;
}
@@ -1307,7 +1305,8 @@
bh = head;
do {
bh->b_dev = inode->i_dev;
- bh->b_blocknr = 0;
+ bh->b_state = 0;
+ bh->b_blocknr = -1000;
bh->b_end_io = end_buffer_io_bad;
tail = bh;
bh = bh->b_this_page;
@@ -1321,7 +1320,7 @@
* block_write_full_page() is SMP-safe - currently it's still
* being called with the kernel lock held, but the code is ready.
*/
-int block_write_full_page (struct file *file, struct page *page, fs_getblock_t fs_get_block)
+int block_write_full_page (struct file *file, struct page *page)
{
struct dentry *dentry = file->f_dentry;
struct inode *inode = dentry->d_inode;
@@ -1358,12 +1357,14 @@
* decisions (block #0 may actually be a valid block)
*/
bh->b_end_io = end_buffer_io_sync;
- if (!buffer_allocated(bh)) {
- err = fs_get_block(inode, block, bh, FS_GETBLK_ALLOCATE);
+ if (!buffer_uptodate(bh) || buffer_hole(bh)) {
+ err = inode->i_op->get_block(inode, block,
+ bh, FS_GETBLOCK_NEW);
if (err)
goto out;
+ bh->b_dev = inode->i_dev;
}
- set_bit(BH_Uptodate, &bh->b_state);
+ bh->b_state |= (1UL << BH_Uptodate); /* safe */
atomic_mark_buffer_dirty(bh,0);

bh = bh->b_this_page;
@@ -1377,7 +1378,7 @@
return err;
}

-int block_write_partial_page (struct file *file, struct page *page, unsigned long offset, unsigned long bytes, const char * buf, fs_getblock_t fs_get_block)
+int block_write_partial_page (struct file *file, struct page *page, unsigned long offset, unsigned long bytes, const char * buf)
{
struct dentry *dentry = file->f_dentry;
struct inode *inode = dentry->d_inode;
@@ -1447,26 +1448,35 @@
* not going to fill it completely.
*/
bh->b_end_io = end_buffer_io_sync;
- if (!buffer_allocated(bh)) {
- unsigned int flags = FS_GETBLK_ALLOCATE;
- if (start_offset || (end_bytes && (i == end_block)))
- flags |= FS_GETBLK_UPDATE;
- err = fs_get_block(inode, block, bh, flags);
+
+ if (!buffer_uptodate(bh) || buffer_hole(bh)) {
+ err = inode->i_op->get_block(inode, block,
+ bh, FS_GETBLOCK_NEW);
if (err)
goto out;
- }
-
- if (start_offset || (end_bytes && (i == end_block))) {
- if (!buffer_uptodate(bh)) {
- lock_kernel();
- ll_rw_block(READ, 1, &bh);
- wait_on_buffer(bh);
- unlock_kernel();
- err = -EIO;
- if (!buffer_uptodate(bh))
- goto out;
+ bh->b_dev = inode->i_dev;
+ /*
+ * if partially written block which has contents on
+ * disk, then we have to read it first.
+ * We also rely on the fact that filesystem holes
+ * cannot be written.
+ */
+ if (start_offset || (end_bytes && (i == end_block))) {
+ if (buffer_new(bh)) {
+ memset(bh->b_data, 0, bh->b_size);
+ } else {
+ ll_rw_block(READ, 1, &bh);
+ lock_kernel();
+ wait_on_buffer(bh);
+ unlock_kernel();
+ err = -EIO;
+ if (!buffer_uptodate(bh))
+ goto out;
+ }
}
+ bh->b_state &= ~(1UL << BH_New); // not strictly needed
}
+ bh->b_state |= (1UL << BH_Uptodate); /* is SMP safe */

err = -EFAULT;
len = blocksize;
@@ -1535,6 +1545,9 @@
*
* brw_page() is SMP-safe, although it's being called with the
* kernel lock held - but the code is ready.
+ *
+ * FIXME: we need a swapper_inode->get_block function to remove
+ * some of the bmap kludges and interface ugliness here.
*/
int brw_page(int rw, struct page *page, kdev_t dev, int b[], int size, int bmap)
{
@@ -1563,7 +1576,7 @@
do {
block = *(b++);

- if (fresh && (bh->b_count != 0))
+ if (fresh && (bh->b_count != 1))
BUG();
if (rw == READ) {
if (!fresh)
@@ -1574,10 +1587,8 @@
} else {
if (bmap && !block)
BUG();
- if (!buffer_uptodate(bh)) {
+ if (!buffer_uptodate(bh))
arr[nr++] = bh;
- bh->b_count++;
- }
}
} else { /* WRITE */
if (!bh->b_blocknr) {
@@ -1591,7 +1602,6 @@
set_bit(BH_Uptodate, &bh->b_state);
set_bit(BH_Dirty, &bh->b_state);
arr[nr++] = bh;
- bh->b_count++;
}
bh = bh->b_this_page;
} while (bh != head);
@@ -1614,30 +1624,7 @@
}

/*
- * This is called by end_request() when I/O has completed.
- */
-void mark_buffer_uptodate(struct buffer_head * bh, int on)
-{
- if (on) {
- struct buffer_head *tmp = bh;
- struct page *page;
- set_bit(BH_Uptodate, &bh->b_state);
- /* If a page has buffers and all these buffers are uptodate,
- * then the page is uptodate. */
- do {
- if (!test_bit(BH_Uptodate, &tmp->b_state))
- return;
- tmp=tmp->b_this_page;
- } while (tmp && tmp != bh);
- page = mem_map + MAP_NR(bh->b_data);
- SetPageUptodate(page);
- return;
- }
- clear_bit(BH_Uptodate, &bh->b_state);
-}
-
-/*
- * Generic "readpage" function for block devices that have the normal
+ * Generic "read page" function for block devices that have the normal
* bmap functionality. This is most of the block device filesystems.
* Reads the page asynchronously --- the unlock_buffer() and
* mark_buffer_uptodate() functions propagate buffer state into the
@@ -1647,7 +1634,7 @@
{
struct dentry *dentry = file->f_dentry;
struct inode *inode = dentry->d_inode;
- unsigned long iblock, phys_block;
+ unsigned long iblock;
struct buffer_head *bh, *head, *arr[MAX_BUF_PER_PAGE];
unsigned int blocksize, blocks;
int nr;
@@ -1666,36 +1653,28 @@
bh = head;
nr = 0;
do {
- phys_block = bh->b_blocknr;
/*
* important, we have to retry buffers that already have
- * their bnr cached but had an IO error!
+ * their bnr cached but had an IO error.
*/
if (!buffer_uptodate(bh)) {
- unsigned long phys_block = bh->b_blocknr;
- if (!buffer_allocated(bh)) {
- phys_block = inode->i_op->bmap(inode, iblock);
- if (phys_block) {
- bh->b_dev = inode->i_dev;
- bh->b_blocknr = phys_block;
- set_bit(BH_Allocated, &bh->b_state);
- }
- }
-
+ inode->i_op->get_block(inode,iblock,bh,FS_GETBLOCK_MAP);
+ if (!buffer_mapped(bh))
+ BUG(); // temporary debugging check
/*
* this is safe to do because we hold the page lock:
*/
- if (phys_block) {
- init_buffer(bh, end_buffer_io_async, NULL);
- arr[nr] = bh;
- bh->b_count++;
- nr++;
- } else {
+ if (buffer_hole(bh)) {
/*
* filesystem 'hole' represents zero-contents.
*/
+ bh->b_state |= (1UL << BH_Uptodate); /* safe */
memset(bh->b_data, 0, blocksize);
- set_bit(BH_Uptodate, &bh->b_state);
+ } else {
+ init_buffer(bh, end_buffer_io_async, NULL);
+ bh->b_dev = inode->i_dev;
+ arr[nr] = bh;
+ nr++;
}
}
iblock++;
@@ -1773,8 +1752,8 @@
/*
* Can the buffer be thrown out?
*/
-#define BUFFER_BUSY_BITS ((1<<BH_Dirty) | (1<<BH_Lock) | (1<<BH_Protected))
-#define buffer_busy(bh) ((bh)->b_count | ((bh)->b_state & BUFFER_BUSY_BITS))
+#define BUFFER_BUSY_BITS ((1<<BH_Dirty) | (1<<BH_Lock))
+#define buffer_busy(bh) ((bh)->b_count | ((bh)->b_state & BUFFER_BUSY_BITS))

/*
* try_to_free_buffers() checks if all the buffers on this particular page
@@ -1819,7 +1798,7 @@
return 1;

busy_buffer_page:
- /* Uhhuh, star writeback so that we don't end up with all dirty pages */
+ /* Uhhuh, start writeback so that we don't end up with all dirty pages */
too_many_dirty_buffers = 1;
wakeup_bdflush(0);
return 0;
@@ -1831,7 +1810,6 @@
{
struct buffer_head * bh;
int found = 0, locked = 0, dirty = 0, used = 0, lastused = 0;
- int protected = 0;
int nlist;
static char *buf_types[NR_LIST] = {"CLEAN","LOCKED","DIRTY"};

@@ -1841,7 +1819,7 @@
printk("Buffer hashed: %6d\n",nr_hashed_buffers);

for(nlist = 0; nlist < NR_LIST; nlist++) {
- found = locked = dirty = used = lastused = protected = 0;
+ found = locked = dirty = used = lastused = 0;
bh = lru_list[nlist];
if(!bh) continue;

@@ -1849,8 +1827,6 @@
found++;
if (buffer_locked(bh))
locked++;
- if (buffer_protected(bh))
- protected++;
if (buffer_dirty(bh))
dirty++;
if (bh->b_count)
@@ -1858,9 +1834,9 @@
bh = bh->b_next_free;
} while (bh != lru_list[nlist]);
printk("%8s: %d buffers, %d used (last=%d), "
- "%d locked, %d protected, %d dirty\n",
+ "%d locked, %d dirty\n",
buf_types[nlist], found, used, lastused,
- locked, protected, dirty);
+ locked, dirty);
};
}

--- linux/fs/inode.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/inode.c Sat Jun 26 15:29:38 1999
@@ -778,8 +778,14 @@

int bmap(struct inode * inode, int block)
{
- if (inode->i_op && inode->i_op->bmap)
- return inode->i_op->bmap(inode, block);
+ struct buffer_head tmp;
+
+ if (inode->i_op && inode->i_op->get_block) {
+ tmp.b_state = 0;
+ tmp.b_blocknr = 0;
+ inode->i_op->get_block(inode, block, &tmp, FS_GETBLOCK_MAP);
+ return tmp.b_blocknr;
+ }
return 0;
}

--- linux/fs/bad_inode.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/bad_inode.c Sat Jun 26 15:29:38 1999
@@ -60,7 +60,7 @@
EIO_ERROR, /* rename */
EIO_ERROR, /* readlink */
bad_follow_link, /* follow_link */
- EIO_ERROR, /* bmap */
+ EIO_ERROR, /* get_block */
EIO_ERROR, /* readpage */
EIO_ERROR, /* writepage */
EIO_ERROR, /* flushpage */
--- linux/fs/devices.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/devices.c Sat Jun 26 15:29:38 1999
@@ -277,7 +277,7 @@
NULL, /* mknod */
NULL, /* rename */
NULL, /* readlink */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
@@ -333,7 +333,7 @@
NULL, /* mknod */
NULL, /* rename */
NULL, /* readlink */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/fifo.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/fifo.c Sat Jun 26 15:29:38 1999
@@ -179,7 +179,7 @@
NULL, /* mknod */
NULL, /* rename */
NULL, /* readlink */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/pipe.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/pipe.c Sat Jun 26 15:29:38 1999
@@ -461,7 +461,7 @@
NULL, /* mknod */
NULL, /* rename */
NULL, /* readlink */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/fs/binfmt_aout.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/binfmt_aout.c Sat Jun 26 15:29:38 1999
@@ -323,7 +323,7 @@

if (N_MAGIC(ex) == ZMAGIC && ex.a_text &&
bprm->dentry->d_inode->i_op &&
- bprm->dentry->d_inode->i_op->bmap &&
+ bprm->dentry->d_inode->i_op->get_block &&
(fd_offset < bprm->dentry->d_inode->i_sb->s_blocksize)) {
printk(KERN_NOTICE "N_TXTOFF < BLOCK_SIZE. Please convert binary.\n");
return -ENOEXEC;
--- linux/fs/ioctl.c.orig Sat Jun 26 14:54:30 1999
+++ linux/fs/ioctl.c Sat Jun 26 15:29:38 1999
@@ -18,14 +18,22 @@

switch (cmd) {
case FIBMAP:
+ {
+ struct buffer_head tmp;
+
if (inode->i_op == NULL)
return -EBADF;
- if (inode->i_op->bmap == NULL)
+ if (inode->i_op->get_block == NULL)
return -EINVAL;
if ((error = get_user(block, (int *) arg)) != 0)
return error;
- block = inode->i_op->bmap(inode,block);
- return put_user(block, (int *) arg);
+
+ tmp.b_state = 0;
+ tmp.b_blocknr = 0;
+ inode->i_op->get_block(inode, block,
+ &tmp, FS_GETBLOCK_MAP);
+ return put_user(tmp.b_blocknr, (int *) arg);
+ }
case FIGETBSZ:
if (inode->i_sb == NULL)
return -EBADF;
--- linux/kernel/sysctl.c.orig Sat Jun 26 14:54:30 1999
+++ linux/kernel/sysctl.c Sat Jun 26 15:29:38 1999
@@ -120,7 +120,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
NULL, /* flushpage */
--- linux/mm/swap_state.c.orig Sat Jun 26 14:54:30 1999
+++ linux/mm/swap_state.c Sat Jun 26 15:29:38 1999
@@ -39,7 +39,7 @@
NULL, /* rename */
NULL, /* readlink */
NULL, /* follow_link */
- NULL, /* bmap */
+ NULL, /* get_block */
NULL, /* readpage */
NULL, /* writepage */
block_flushpage, /* flushpage */
--- linux/mm/page_io.c.orig Sat Jun 26 14:54:30 1999
+++ linux/mm/page_io.c Sat Jun 26 15:29:38 1999
@@ -99,7 +99,7 @@
} else if (p->swap_file) {
struct inode *swapf = p->swap_file->d_inode;
int i;
- if (swapf->i_op->bmap == NULL
+ if (swapf->i_op->get_block == NULL
&& swapf->i_op->smap != NULL){
/*
With MS-DOS, we use msdos_smap which returns
@@ -110,7 +110,7 @@
It sounds like ll_rw_swap_file defined
its operation size (sector size) based on
PAGE_SIZE and the number of blocks to read.
- So using bmap or smap should work even if
+ So using get_block or smap should work even if
smap will require more blocks.
*/
int j;
--- linux/include/linux/fs.h.orig Sat Jun 26 14:54:30 1999
+++ linux/include/linux/fs.h Sat Jun 26 15:29:38 1999
@@ -188,9 +188,9 @@
#define BH_Dirty 1 /* 1 if the buffer is dirty */
#define BH_Lock 2 /* 1 if the buffer is locked */
#define BH_Req 3 /* 0 if the buffer has been invalidated */
-#define BH_Allocated 4 /* 1 if the buffer has allocated backing store */
-#define BH_Protected 6 /* 1 if the buffer is protected */
-
+#define BH_Mapped 4 /* b_blocknr is a cached block mapping value */
+#define BH_New 5 /* buffer got freshly allocated by the fs */
+#define BH_Hole 6 /* buffer is an fs hole */
/*
* Try to keep the most commonly used fields in single cache lines (16
* bytes) to improve performance. This ordering should be
@@ -242,8 +242,9 @@
#define buffer_dirty(bh) __buffer_state(bh,Dirty)
#define buffer_locked(bh) __buffer_state(bh,Lock)
#define buffer_req(bh) __buffer_state(bh,Req)
-#define buffer_allocated(bh) __buffer_state(bh,Allocated)
-#define buffer_protected(bh) __buffer_state(bh,Protected)
+#define buffer_mapped(bh) __buffer_state(bh,Mapped)
+#define buffer_new(bh) __buffer_state(bh,New)
+#define buffer_hole(bh) __buffer_state(bh,Hole)

#define buffer_page(bh) (mem_map + MAP_NR((bh)->b_data))
#define touch_buffer(bh) set_bit(PG_referenced, &buffer_page(bh)->flags)
@@ -585,6 +586,9 @@
int (*lock) (struct file *, int, struct file_lock *);
};

+#define FS_GETBLOCK_MAP 0
+#define FS_GETBLOCK_NEW 1
+
struct inode_operations {
struct file_operations * default_file_ops;
int (*create) (struct inode *,struct dentry *,int);
@@ -601,13 +605,19 @@
struct dentry * (*follow_link) (struct dentry *, struct dentry *, unsigned int);
/*
* the order of these functions within the VFS template has been
- * changed because SMP locking has changed: from now on all bmap,
+ * changed because SMP locking has changed: from now on all get_block,
* readpage, writepage and flushpage functions are supposed to do
* whatever locking they need to get proper SMP operation - for
* now in most cases this means a lock/unlock_kernel at entry/exit.
* [The new order is also slightly more logical :)]
*/
- int (*bmap) (struct inode *,int);
+ /*
+ * Generic block allocator exported by the lowlevel fs. All metadata
+ * details are handled by the lowlevel fs, all 'logical data content'
+ * details are handled by the highlevel block layer.
+ */
+ int (*get_block) (struct inode *, long, struct buffer_head *, int);
+
int (*readpage) (struct file *, struct page *);
int (*writepage) (struct file *, struct page *);
int (*flushpage) (struct inode *, struct page *, unsigned long);
@@ -751,12 +761,28 @@
#define BUF_DIRTY 2 /* Dirty buffers, not yet scheduled for write */
#define NR_LIST 3

-void mark_buffer_uptodate(struct buffer_head *, int);
+/*
+ * This is called by bh->b_end_io() handlers when I/O has completed.
+ */
+extern inline void mark_buffer_uptodate(struct buffer_head * bh, int on)
+{
+ if (on)
+ set_bit(BH_Uptodate, &bh->b_state);
+ else
+ clear_bit(BH_Uptodate, &bh->b_state);
+}
+
+#define atomic_set_buffer_clean(bh) test_and_clear_bit(BH_Dirty, &(bh)->b_state)
+
+extern inline void __mark_buffer_clean(struct buffer_head *bh)
+{
+ refile_buffer(bh);
+}

extern inline void mark_buffer_clean(struct buffer_head * bh)
{
- if (test_and_clear_bit(BH_Dirty, &bh->b_state))
- refile_buffer(bh);
+ if (atomic_set_buffer_clean(bh))
+ __mark_buffer_clean(bh);
}

extern void FASTCALL(__mark_buffer_dirty(struct buffer_head *bh, int flag));
@@ -872,16 +898,12 @@

extern int brw_page(int, struct page *, kdev_t, int [], int, int);

-typedef long (*writepage_t)(struct file *, struct page *, unsigned long, unsigned long, const char *);
-typedef int (*fs_getblock_t)(struct inode *, unsigned long, struct buffer_head *, unsigned int);
-
-#define FS_GETBLK_ALLOCATE 1
-#define FS_GETBLK_UPDATE 2
+typedef int (*writepage_t)(struct file *, struct page *, unsigned long, unsigned long, const char *);

/* Generic buffer handling for block filesystems.. */
extern int block_read_full_page(struct file *, struct page *);
-extern int block_write_full_page (struct file *, struct page *, fs_getblock_t);
-extern int block_write_partial_page (struct file *, struct page *, unsigned long, unsigned long, const char *, fs_getblock_t);
+extern int block_write_full_page (struct file *, struct page *);
+extern int block_write_partial_page (struct file *, struct page *, unsigned long, unsigned long, const char *);
extern int block_flushpage(struct inode *, struct page *, unsigned long);

extern int generic_file_mmap(struct file *, struct vm_area_struct *);
--- linux/include/linux/ext2_fs.h.orig Sat Jun 26 14:54:30 1999
+++ linux/include/linux/ext2_fs.h Sat Jun 26 15:29:39 1999
@@ -553,7 +553,8 @@
extern void ext2_check_inodes_bitmap (struct super_block *);

/* inode.c */
-extern int ext2_bmap (struct inode *, int);
+extern long ext2_bmap (struct inode *, long);
+extern int ext2_get_block (struct inode *, long, struct buffer_head *, int);

extern struct buffer_head * ext2_getblk (struct inode *, long, int, int *);
extern int ext2_getblk_block (struct inode *, long, int, int *, int *);
--- linux/drivers/block/raid5.c.orig Sat Jun 26 14:56:47 1999
+++ linux/drivers/block/raid5.c Sat Jun 26 15:29:39 1999
@@ -594,7 +594,7 @@
bh->b_rdev = raid_conf->disks[i].dev;
bh->b_rsector = sh->sector;

- bh->b_state = (1 << BH_Req) | (1 << BH_Allocated);
+ bh->b_state = (1 << BH_Req) | (1 << BH_Mapped);
bh->b_size = sh->size;
bh->b_list = BUF_LOCKED;
}