Subject: Re: PATCH - final (I hope) tidy up for generic_make_request interface

On Thursday August 24, torvalds@transmeta.com wrote:
>
>
> On Thu, 24 Aug 2000, Neil Brown wrote:
> > >
> > > Which implies that we should not have it in the generic path, but in the
> > > per-queue path.
> >
> > I'm beginning to see your point...
> >
> > Maybe we should then have a "blk_queue_highmem_capable(q,flag)"
> > function whereby drivers can tell elevator_make_request that they
> > are highmem aware, and so the create_bounce will be skipped ??
>
> No. No flags. That's the mistake SCSI did, and it is a fundamental
> mistake. It makes it basically impossible to "shift out" old code.

Ok. I was just generalising from "blk_queue_{headactive,pluggable}".

>
> > So, in summary:
> > I was advocating the "keep it simple for now" position advocated by
> > the comment, and applying the one solution to all drivers.
>
> I agree with that notion. I don't think your patch was a bad one. I just
> think that it would make it a more straightforward operation later when we
> _do_ start to make it aware of clever devices to not do it the way you
> did.
>
> Linus

Ok, try this. As well as what was described in the previous patch,
it:

1/ left the create_bounce call where it was.
2/ defined bh_kmap and bh_kunmap: bh_kmap maps the page for a bh and
returns a pointer at the appropriate offset, and bh_kunmap undoes
the mapping (see the usage sketch after this list). I would REALLY
like someone else who is more familiar with highmem to confirm this.
3/ modified raid5 to use bh_k{,un}map where appropriate.
4/ Fixed a couple of little bugs in raid1 where blocknr was set to
rsector*sectors instead of rsector/sectors. This would have
caused problems when a read failed and was retried on a mirror.
5/ renamed generic_make_request to raw_rw_block. The former name
just didn't seem to make sense any more.
6/ documented ll_rw_block and raw_rw_block (please, someone (Jens?),
proofread it for me).
7/ Added a comment to the doco for blk_queue_make_request to the effect
that drivers need to be aware of highmem issues.
8/ removed the setting of BH_Lock and BH_Mapped when a buffer is
found to have a mapping outside of the device. These bits must
be set by this stage anyway (or aren't needed).
9/ tested.
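
To make 2/ concrete, here is roughly how I would expect a driver to use
the new helpers. This is an illustration only, not part of the patch,
and copy_from_bh() is a made-up name:

#include <linux/fs.h>		/* struct buffer_head */
#include <linux/highmem.h>	/* bh_kmap, bh_kunmap */
#include <linux/string.h>	/* memcpy */

/* Copy the data described by a buffer_head into a driver-private
 * buffer, coping with b_page possibly living in high memory. */
static void copy_from_bh(void *dst, struct buffer_head *bh)
{
	char *data = bh_kmap(bh);	/* kmap()s the page if it is highmem,
					 * otherwise just returns bh->b_data */
	memcpy(dst, data, bh->b_size);
	bh_kunmap(bh);			/* drop the temporary mapping, if any */
}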

I'm really feeling a need to document the "struct buffer_head"
structure, just to clarify all that I have learnt lately, but I
haven't managed to figure out the point of BH_Req. What does it mean
for a buffer to be invalidated, and (more significantly) when would
you find an invalid buffer without already knowing, simply from how
you found it, whether it was valid or not?

NeilBrown


--- ./fs/buffer.c 2000/08/25 01:36:51 1.1
+++ ./fs/buffer.c 2000/08/25 02:34:30
@@ -1839,7 +1839,6 @@
int pageind;
int bhind;
int offset;
- int sectors = size>>9;
unsigned long blocknr;
struct kiobuf * iobuf = NULL;
struct page * map;
@@ -1891,9 +1890,8 @@
tmp->b_this_page = tmp;

init_buffer(tmp, end_buffer_io_kiobuf, iobuf);
- tmp->b_rdev = tmp->b_dev = dev;
+ tmp->b_dev = dev;
tmp->b_blocknr = blocknr;
- tmp->b_rsector = blocknr*sectors;
tmp->b_state = (1 << BH_Mapped) | (1 << BH_Lock) | (1 << BH_Req);

if (rw == WRITE) {
@@ -1907,7 +1905,7 @@

atomic_inc(&iobuf->io_count);

- generic_make_request(rw, tmp);
+ raw_rw_block(rw, tmp);
/*
* Wait for IO if we have got too much
*/
--- ./include/linux/highmem.h 2000/08/25 01:53:38 1.1
+++ ./include/linux/highmem.h 2000/08/25 02:00:45
@@ -17,6 +17,21 @@
extern struct page * replace_with_highmem(struct page *);
extern struct buffer_head * create_bounce(int rw, struct buffer_head * bh_orig);

+static __inline__ char *bh_kmap(struct buffer_head *bh)
+{
+ unsigned long vaddr;
+ if (!PageHighMem(bh->b_page))
+ return bh->b_data;
+ vaddr = kmap(bh->b_page);
+ return (char*)vaddr + bh_offset(bh);
+}
+
+static __inline__ void bh_kunmap(struct buffer_head *bh)
+{
+ if (PageHighMem(bh->b_page))
+ kunmap(bh->b_page);
+}
+
#else /* CONFIG_HIGHMEM */

extern inline unsigned int nr_free_highpages(void) { return 0; }
@@ -31,6 +46,12 @@

#define kmap_atomic(page,idx) kmap(page)
#define kunmap_atomic(page,idx) kunmap(page)
+
+static __inline__ char *bh_kmap(struct buffer_head *bh)
+{
+ return bh->b_data;
+}
+#define bh_kunmap(bh) do { } while (0)

#endif /* CONFIG_HIGHMEM */

--- ./include/linux/blkdev.h 2000/08/25 02:34:40 1.1
+++ ./include/linux/blkdev.h 2000/08/25 02:34:44
@@ -150,7 +150,7 @@
extern struct blk_dev_struct blk_dev[MAX_BLKDEV];
extern void grok_partitions(struct gendisk *dev, int drive, unsigned minors, long size);
extern void register_disk(struct gendisk *dev, kdev_t first, unsigned minors, struct block_device_operations *ops, long size);
-extern void generic_make_request(int rw, struct buffer_head * bh);
+extern void raw_rw_block(int rw, struct buffer_head * bh);
extern request_queue_t *blk_get_queue(kdev_t dev);
extern void blkdev_release_request(struct request *);

--- ./drivers/block/ll_rw_blk.c 2000/08/25 01:36:51 1.1
+++ ./drivers/block/ll_rw_blk.c 2000/08/25 04:36:30
@@ -262,13 +262,18 @@
*
* Description:
* The normal way for &struct buffer_heads to be passed to a device driver
- * it to collect into requests on a request queue, and allow the device
- * driver to select requests off that queue when it is ready. This works
- * well for many block devices. However some block devices (typically
- * virtual devices such as md or lvm) do not benefit from the processes on
+ * is for them to be collected into requests on a request queue, and then to
+ * allow the device driver to select requests off that queue when it is ready.
+ * This works well for many block devices. However some block devices (typically
+ * virtual devices such as md or lvm) do not benefit from the processing on
* the request queue, and are served best by having the requests passed
* directly to them. This can be achieved by providing a function to
* blk_queue_make_request().
+ *
+ * Caveat:
+ * The driver that does this *must* be able to deal appropriately with
+ * buffers in "high memory", either by calling bh_kmap() to get a kernel mapping,
+ * or by calling create_bounce() to create a buffer in normal memory.
**/

void blk_queue_make_request(request_queue_t * q, make_request_fn * mfn)
@@ -417,7 +422,7 @@
* blk_init_queue() must be paired with a blk_cleanup-queue() call
* when the block device is deactivated (such as at module unload).
**/
-static int __make_request(request_queue_t * q, int rw, struct buffer_head * bh);
+static int elevator_make_request(request_queue_t * q, int rw, struct buffer_head * bh);
void blk_init_queue(request_queue_t * q, request_fn_proc * rfn)
{
INIT_LIST_HEAD(&q->queue_head);
@@ -429,7 +434,7 @@
q->back_merge_fn = ll_back_merge_fn;
q->front_merge_fn = ll_front_merge_fn;
q->merge_requests_fn = ll_merge_requests_fn;
- q->make_request_fn = __make_request;
+ q->make_request_fn = elevator_make_request;
q->plug_tq.sync = 0;
q->plug_tq.routine = &generic_unplug_device;
q->plug_tq.data = q;
@@ -676,7 +681,7 @@
attempt_merge(q, blkdev_entry_to_request(prev), max_sectors, max_segments);
}

-static int __make_request(request_queue_t * q, int rw,
+static int elevator_make_request(request_queue_t * q, int rw,
struct buffer_head * bh)
{
unsigned int sector, count;
@@ -703,13 +708,6 @@
goto end_io;
}

- /* We'd better have a real physical mapping!
- Check this bit only if the buffer was dirty and just locked
- down by us so at this point flushpage will block and
- won't clear the mapped bit under us. */
- if (!buffer_mapped(bh))
- BUG();
-
/*
* Temporary solution - in 2.5 this will be done by the lowlevel
* driver. Create a bounce buffer if the buffer data points into
@@ -829,19 +827,53 @@
return 0;
}

-void generic_make_request (int rw, struct buffer_head * bh)
+/**
+ * raw_rw_block: hand a buffer head to its device driver for I/O
+ * @rw: READ, WRITE, or READA - what sort of I/O is desired.
+ * @bh: The buffer head describing the location in memory and on the device.
+ *
+ * raw_rw_block() is used to make I/O requests of block devices. It is passed
+ * a &struct buffer_head and an @rw value. The %READ and %WRITE options are
+ * (hopefully) obvious in meaning. The %READA value means that a read is requested,
+ * but that the driver is free to fail the request if, for example, it cannot get
+ * needed resources immediately.
+ *
+ * raw_rw_block() does not return any status. The success/failure status of the
+ * request, along with notification of completion, is delivered asynchronously through
+ * the bh->b_end_io function described (one day) elsewhere.
+ *
+ * The caller of raw_rw_block must make sure that b_page, b_data and b_size are set to
+ * describe the memory buffer, that b_dev and b_blocknr are set to describe the
+ * device address, and that b_end_io and optionally b_private are set to describe how
+ * completion notification should be signaled. BH_Mapped should also be set
+ * (to confirm that b_dev and b_blocknr are valid).
+ *
+ * raw_rw_block and the drivers it calls may use b_rdev, b_rsector and b_reqnext,
+ * so the values of these should not be depended upon after the call to raw_rw_block.
+ * No other fields, and in particular, no other flags, are changed by raw_rw_block
+ * or any lower level drivers.
+ *
+ */
+void raw_rw_block (int rw, struct buffer_head * bh)
{
- int major = MAJOR(bh->b_rdev);
+ int major = MAJOR(bh->b_dev);
request_queue_t *q;
+ unsigned long sector;
+ unsigned int count;
+
+
+ /* We'd better have a real physical mapping! */
+ if (!buffer_mapped(bh))
+ BUG();
+
+ count = bh->b_size >> 9;
+ bh->b_rsector = sector = bh->b_blocknr * count;
+ bh->b_rdev = bh->b_dev;
+
if (blk_size[major]) {
unsigned long maxsector = (blk_size[major][MINOR(bh->b_rdev)] << 1) + 1;
- unsigned int sector, count;
-
- count = bh->b_size >> 9;
- sector = bh->b_rsector;

if (maxsector < count || maxsector - count < sector) {
- bh->b_state &= (1 << BH_Lock) | (1 << BH_Mapped);
if (blk_size[major][MINOR(bh->b_rdev)]) {

/* This may well happen - the kernel calls bread()
@@ -871,19 +903,49 @@
q = blk_get_queue(bh->b_rdev);
if (!q) {
printk(KERN_ERR
- "generic_make_request: Trying to access nonexistent block-device %s (%ld)\n",
+ "raw_rw_block: Trying to access nonexistent block-device %s (%ld)\n",
kdevname(bh->b_rdev), bh->b_rsector);
buffer_IO_error(bh);
break;
}

+ if ((rw & WRITE) && is_read_only(bh->b_rdev)) {
+ printk(KERN_NOTICE "Can't write to read-only device %s\n",
+ kdevname(bh->b_rdev));
+ buffer_IO_error(bh);
+ break;
+ }
+
}
while (q->make_request_fn(q, rw, bh));
}

-/* This function can be used to request a number of buffers from a block
- device. Currently the only restriction is that all buffers must belong to
- the same device */
+/**
+ * ll_rw_block: low-level access to block devices
+ * @rw: whether to %READ or %WRITE or maybe %READA (readahead)
+ * @nr: number of &struct buffer_heads in the array
+ * @bhs: array of pointers to &struct buffer_head
+ *
+ * ll_rw_block() takes an array of pointers to &struct buffer_heads, and
+ * requests an I/O operation on them, either a %READ or a %WRITE. The third %READA
+ * option is described in the documentation for raw_rw_block() which ll_rw_block()
+ * calls.
+ *
+ * This function provides extra functionality, not in raw_rw_block(), that is
+ * relevant to buffers in the buffer cache or page cache.
+ * In particular it drops any buffer that it cannot get a lock on (with the
+ * BH_Lock state bit), any buffer that appears to be clean when doing a write request,
+ * and any buffer that appears to be up-to-date when doing a read request.
+ * Further, it marks as clean the buffers that are processed for writing (the buffer
+ * cache won't assume that they are actually clean until the buffer gets unlocked).
+ *
+ * The comments in the documentation for raw_rw_block() about what fields need to be
+ * set up apply to ll_rw_block as well.
+ *
+ * Caveat:
+ * All of the buffers must be for the same device, and must also be of the current
+ * approved size for the device.
+ */

void ll_rw_block(int rw, int nr, struct buffer_head * bhs[])
{
@@ -914,12 +976,6 @@
}
}

- if ((rw & WRITE) && is_read_only(bhs[0]->b_dev)) {
- printk(KERN_NOTICE "Can't write to read-only device %s\n",
- kdevname(bhs[0]->b_dev));
- goto sorry;
- }
-
for (i = 0; i < nr; i++) {
bh = bhs[i];

@@ -953,14 +1009,7 @@

}

- /*
- * First step, 'identity mapping' - RAID or LVM might
- * further remap this.
- */
- bh->b_rdev = bh->b_dev;
- bh->b_rsector = bh->b_blocknr * (bh->b_size>>9);
-
- generic_make_request(rw, bh);
+ raw_rw_block(rw, bh);
}
return;

@@ -1164,5 +1213,5 @@
EXPORT_SYMBOL(blk_queue_headactive);
EXPORT_SYMBOL(blk_queue_pluggable);
EXPORT_SYMBOL(blk_queue_make_request);
-EXPORT_SYMBOL(generic_make_request);
+EXPORT_SYMBOL(raw_rw_block);
EXPORT_SYMBOL(blkdev_release_request);
--- ./drivers/block/rd.c 2000/08/25 01:36:51 1.1
+++ ./drivers/block/rd.c 2000/08/25 01:37:10
@@ -226,7 +226,7 @@
}

sbh = CURRENT->bh;
- rbh = getblk(sbh->b_dev, sbh->b_blocknr, sbh->b_size);
+ rbh = getblk(sbh->b_rdev, sbh->b_rsector/(sbh->b_size>>9), sbh->b_size);
if (CURRENT->cmd == READ) {
if (sbh != rbh)
memcpy(CURRENT->buffer, rbh->b_data, rbh->b_size);
--- ./drivers/block/raid1.c 2000/08/25 01:36:51 1.1
+++ ./drivers/block/raid1.c 2000/08/25 02:35:07
@@ -371,7 +371,7 @@
{
struct buffer_head *bh = r1_bh->master_bh;

- io_request_done(bh->b_rsector, mddev_to_conf(r1_bh->mddev),
+ io_request_done(bh->b_blocknr*(bh->b_size>>9), mddev_to_conf(r1_bh->mddev),
test_bit(R1BH_SyncPhase, &r1_bh->state));

bh->b_end_io(bh, uptodate);
@@ -599,13 +599,11 @@

bh_req = &r1_bh->bh_req;
memcpy(bh_req, bh, sizeof(*bh));
- bh_req->b_blocknr = bh->b_rsector * sectors;
+ bh_req->b_blocknr = bh->b_rsector / sectors;
bh_req->b_dev = mirror->dev;
- bh_req->b_rdev = mirror->dev;
- /* bh_req->b_rsector = bh->n_rsector; */
bh_req->b_end_io = raid1_end_request;
bh_req->b_private = r1_bh;
- generic_make_request (rw, bh_req);
+ raw_rw_block (rw, bh_req);
return 0;
}

@@ -643,10 +641,8 @@
/*
* prepare mirrored mbh (fields ordered for max mem throughput):
*/
- mbh->b_blocknr = bh->b_rsector * sectors;
+ mbh->b_blocknr = bh->b_rsector / sectors;
mbh->b_dev = conf->mirrors[i].dev;
- mbh->b_rdev = conf->mirrors[i].dev;
- mbh->b_rsector = bh->b_rsector;
mbh->b_state = (1<<BH_Req) | (1<<BH_Dirty) |
(1<<BH_Mapped) | (1<<BH_Lock);

@@ -680,7 +676,7 @@
while(bh) {
struct buffer_head *bh2 = bh;
bh = bh->b_next;
- generic_make_request(rw, bh2);
+ raw_rw_block(rw, bh2);
}
return (0);
}
@@ -1128,7 +1124,6 @@
int disks = MD_SB_DISKS;
struct buffer_head *bhl, *mbh;
raid1_conf_t *conf;
- int sectors = bh->b_size >> 9;

conf = mddev_to_conf(mddev);
bhl = raid1_alloc_bh(conf, conf->raid_disks); /* don't really need this many */
@@ -1157,8 +1152,6 @@
*/
mbh->b_blocknr = bh->b_blocknr;
mbh->b_dev = conf->mirrors[i].dev;
- mbh->b_rdev = conf->mirrors[i].dev;
- mbh->b_rsector = bh->b_blocknr * sectors;
mbh->b_state = (1<<BH_Req) | (1<<BH_Dirty) |
(1<<BH_Mapped) | (1<<BH_Lock);
atomic_set(&mbh->b_count, 1);
@@ -1180,8 +1173,8 @@
while (mbh) {
struct buffer_head *bh1 = mbh;
mbh = mbh->b_next;
- generic_make_request(WRITE, bh1);
- md_sync_acct(bh1->b_rdev, bh1->b_size/512);
+ raw_rw_block(WRITE, bh1);
+ md_sync_acct(bh1->b_dev, bh1->b_size/512);
}
} else {
dev = bh->b_dev;
@@ -1192,8 +1185,7 @@
} else {
printk (REDIRECT_SECTOR,
partition_name(bh->b_dev), bh->b_blocknr);
- bh->b_rdev = bh->b_dev;
- generic_make_request(READ, bh);
+ raw_rw_block(READ, bh);
}
}

@@ -1209,8 +1201,7 @@
} else {
printk (REDIRECT_SECTOR,
partition_name(bh->b_dev), bh->b_blocknr);
- bh->b_rdev = bh->b_dev;
- generic_make_request (r1_bh->cmd, bh);
+ raw_rw_block (r1_bh->cmd, bh);
}
break;
}
@@ -1392,7 +1383,6 @@
bh->b_size = bsize;
bh->b_list = BUF_LOCKED;
bh->b_dev = mirror->dev;
- bh->b_rdev = mirror->dev;
bh->b_state = (1<<BH_Req) | (1<<BH_Mapped);
if (!bh->b_page)
BUG();
@@ -1402,11 +1392,10 @@
BUG();
bh->b_end_io = end_sync_read;
bh->b_private = r1_bh;
- bh->b_rsector = block_nr<<1;
init_waitqueue_head(&bh->b_wait);

- generic_make_request(READ, bh);
- md_sync_acct(bh->b_rdev, bh->b_size/512);
+ raw_rw_block(READ, bh);
+ md_sync_acct(bh->b_dev, bh->b_size/512);

return (bsize >> 10);

--- ./drivers/block/raid5.c 2000/08/25 01:36:51 1.1
+++ ./drivers/block/raid5.c 2000/08/25 02:35:20
@@ -644,9 +644,6 @@
bh->b_data = b_data;
bh->b_page = b_page;

- bh->b_rdev = conf->disks[i].dev;
- bh->b_rsector = sh->sector;
-
bh->b_state = (1 << BH_Req) | (1 << BH_Mapped);
bh->b_size = sh->size;
bh->b_list = BUF_LOCKED;
@@ -855,6 +852,7 @@
raid5_conf_t *conf = sh->raid_conf;
int i, pd_idx = sh->pd_idx, disks = conf->raid_disks, count;
struct buffer_head *bh_ptr[MAX_XOR_BLOCKS];
+ char *bdata;

PRINTK("compute_parity, stripe %lu, method %d\n", sh->sector, method);
for (i = 0; i < disks; i++) {
@@ -864,7 +862,9 @@
sh->bh_copy[i] = raid5_alloc_buffer(sh, sh->size);
raid5_build_block(sh, sh->bh_copy[i], i);
atomic_set_buffer_dirty(sh->bh_copy[i]);
- memcpy(sh->bh_copy[i]->b_data, sh->bh_new[i]->b_data, sh->size);
+ bdata = bh_kmap(sh->bh_new[i]);
+ memcpy(sh->bh_copy[i]->b_data, bdata, sh->size);
+ bh_kunmap(sh->bh_new[i]);
}
if (sh->bh_copy[pd_idx] == NULL) {
sh->bh_copy[pd_idx] = raid5_alloc_buffer(sh, sh->size);
@@ -1068,12 +1068,12 @@
if (!operational[i] && !conf->resync_parity) {
PRINTK("writing spare %d\n", i);
atomic_inc(&sh->nr_pending);
- bh->b_dev = bh->b_rdev = conf->spare->dev;
- generic_make_request(WRITE, bh);
+ bh->b_dev = conf->spare->dev;
+ raw_rw_block(WRITE, bh);
} else {
atomic_inc(&sh->nr_pending);
- bh->b_dev = bh->b_rdev = conf->disks[i].dev;
- generic_make_request(WRITE, bh);
+ bh->b_dev = conf->disks[i].dev;
+ raw_rw_block(WRITE, bh);
}
atomic_dec(&bh->b_count);
}
@@ -1111,8 +1111,8 @@
continue;
lock_get_bh(sh->bh_old[i]);
atomic_inc(&sh->nr_pending);
- sh->bh_old[i]->b_dev = sh->bh_old[i]->b_rdev = conf->disks[i].dev;
- generic_make_request(READ, sh->bh_old[i]);
+ sh->bh_old[i]->b_dev = conf->disks[i].dev;
+ raw_rw_block(READ, sh->bh_old[i]);
atomic_dec(&sh->bh_old[i]->b_count);
}
PRINTK("handle_stripe() %lu, reading %d old buffers\n", sh->sector, md_atomic_read(&sh->nr_pending));
@@ -1128,6 +1128,7 @@
{
int i;
int method1 = INT_MAX;
+ char *bdata;

method1 = nr_read - nr_cache_overwrite;

@@ -1140,7 +1141,9 @@
continue;
if (!sh->bh_old[i])
compute_block(sh, i);
- memcpy(sh->bh_new[i]->b_data, sh->bh_old[i]->b_data, sh->size);
+ bdata = bh_kmap(sh->bh_new[i]);
+ memcpy(bdata, sh->bh_old[i]->b_data, sh->size);
+ bh_kunmap(sh->bh_new[i]);
}
complete_stripe(sh);
return;
@@ -1156,8 +1159,8 @@
raid5_build_block(sh, sh->bh_old[i], i);
lock_get_bh(sh->bh_old[i]);
atomic_inc(&sh->nr_pending);
- sh->bh_old[i]->b_dev = sh->bh_old[i]->b_rdev = conf->disks[i].dev;
- generic_make_request(READ, sh->bh_old[i]);
+ sh->bh_old[i]->b_dev = conf->disks[i].dev;
+ raw_rw_block(READ, sh->bh_old[i]);
atomic_dec(&sh->bh_old[i]->b_count);
}
PRINTK("handle_stripe() %lu, phase READ_OLD, pending %d buffers\n", sh->sector, md_atomic_read(&sh->nr_pending));
@@ -1168,7 +1171,9 @@
if (!sh->bh_new[i])
continue;
if (sh->bh_old[i]) {
- memcpy(sh->bh_new[i]->b_data, sh->bh_old[i]->b_data, sh->size);
+ bdata = bh_kmap(sh->bh_new[i]);
+ memcpy(bdata, sh->bh_old[i]->b_data, sh->size);
+ bh_kunmap(sh->bh_new[i]);
continue;
}
#if RAID5_PARANOIA
@@ -1185,8 +1190,8 @@
#endif
lock_get_bh(sh->bh_req[i]);
atomic_inc(&sh->nr_pending);
- sh->bh_req[i]->b_dev = sh->bh_req[i]->b_rdev = conf->disks[i].dev;
- generic_make_request(READ, sh->bh_req[i]);
+ sh->bh_req[i]->b_dev = conf->disks[i].dev;
+ raw_rw_block(READ, sh->bh_req[i]);
atomic_dec(&sh->bh_req[i]->b_count);
}
PRINTK("handle_stripe() %lu, phase READ, pending %d\n", sh->sector, md_atomic_read(&sh->nr_pending));
@@ -1221,9 +1226,9 @@
raid5_build_block(sh, bh, i);
lock_get_bh(bh);
atomic_inc(&sh->nr_pending);
- bh->b_dev = bh->b_rdev = conf->disks[i].dev;
- generic_make_request(READ, bh);
- md_sync_acct(bh->b_rdev, bh->b_size/512);
+ bh->b_dev = conf->disks[i].dev;
+ raw_rw_block(READ, bh);
+ md_sync_acct(bh->b_dev, bh->b_size/512);
atomic_dec(&sh->bh_old[i]->b_count);
}
PRINTK("handle_stripe_sync() %lu, phase READ_OLD, pending %d buffers\n", sh->sector, md_atomic_read(&sh->nr_pending));
@@ -1250,9 +1255,9 @@
}
atomic_inc(&sh->nr_pending);
lock_get_bh(bh);
- bh->b_dev = bh->b_rdev = conf->spare->dev;
- generic_make_request(WRITE, bh);
- md_sync_acct(bh->b_rdev, bh->b_size/512);
+ bh->b_dev = conf->spare->dev;
+ raw_rw_block(WRITE, bh);
+ md_sync_acct(bh->b_dev, bh->b_size/512);
atomic_dec(&bh->b_count);
PRINTK("handle_stripe_sync() %lu, phase WRITE, pending %d buffers\n", sh->sector, md_atomic_read(&sh->nr_pending));
}
@@ -1276,9 +1281,9 @@
atomic_set_buffer_dirty(bh);
lock_get_bh(bh);
atomic_inc(&sh->nr_pending);
- bh->b_dev = bh->b_rdev = conf->disks[pd_idx].dev;
- generic_make_request(WRITE, bh);
- md_sync_acct(bh->b_rdev, bh->b_size/512);
+ bh->b_dev = conf->disks[pd_idx].dev;
+ raw_rw_block(WRITE, bh);
+ md_sync_acct(bh->b_dev, bh->b_size/512);
atomic_dec(&bh->b_count);
PRINTK("handle_stripe_sync() %lu phase WRITE, pending %d buffers\n",
sh->sector, md_atomic_read(&sh->nr_pending));
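
To make the calling convention documented for raw_rw_block() concrete,
here is a rough sketch of what a caller would set up for a freshly
initialised, private buffer_head. This is illustration only, not part
of the patch; submit_one_block() and my_end_io() are made-up names, and
b_page, b_data and b_size are assumed to already describe the memory
buffer:

#include <linux/fs.h>
#include <linux/blkdev.h>

/* Completion handler: record the result and unlock the buffer. */
static void my_end_io(struct buffer_head *bh, int uptodate)
{
	mark_buffer_uptodate(bh, uptodate);
	unlock_buffer(bh);
}

/* Fill in the device and completion side of a private bh and hand it
 * to raw_rw_block().  Note that b_rdev and b_rsector are *not* set
 * here - raw_rw_block() derives them from b_dev and b_blocknr. */
static void submit_one_block(int rw, struct buffer_head *bh,
			     kdev_t dev, unsigned long block)
{
	bh->b_dev = dev;
	bh->b_blocknr = block;
	bh->b_end_io = my_end_io;
	bh->b_private = NULL;
	/* BH_Mapped says b_dev/b_blocknr are valid; BH_Lock because
	 * my_end_io() unlocks the buffer on completion. */
	bh->b_state = (1 << BH_Mapped) | (1 << BH_Lock);

	raw_rw_block(rw, bh);
}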



