    Date: 19 Mar 2004
    Subject: Re: 2.6.4-mm2

    Jens Axboe <axboe@suse.de> wrote:
    >
    > > Is it not the case that two dm maps can refer to the same queue? Say, one
    > > map uses /dev/hda1 and another map uses /dev/hda2?
    > >
    > > If so, then when the /dev/hda queue is plugged we need to tell both the
    > > higher-level maps that this queue needs an unplug. So blk_plug_device()
    > > and the various unplug functions need to perform upcalls to an arbitrary
    > > number of higher-level drivers, and those drivers need to keep track of the
    > > currently-plugged queues without adding data structures to the
    > > request_queue structure.
    > >
    > > It can be done of course, but could get messy.
    >
    > That would get nasty; it's much more natural to track it from the other
    > end. I view it as a dm (or whatever) problem: they need to track who
    > has pending io on their behalf, which is pretty easy to do from e.g.
    > __map_bio().

    But dm doesn't know enough. Suppose it is managing a map which includes
    /dev/hda1 and I do some I/O against /dev/hda2 which plugs the queue. dm
    needs to know about that plug.

    Actually the data structure isn't toooo complex.

    - In the request_queue add a list_head of "interested drivers"

    - In a dm map, add:

    struct interested_driver {
            struct list_head list;
            void (*plug_upcall)(struct request_queue *q, void *private);
            void (*unplug_upcall)(struct request_queue *q, void *private);
            void *private;
    };

    and when setting up a map, dm does:

    void blk_register_interested_driver(struct request_queue *q,
                                        struct interested_driver *d)
    {
            list_add(&d->list, &q->interested_driver_list);
    }

    - In blk_plug_device():

    struct interested_driver *d;

    list_for_each_entry(d, &q->interested_driver_list, list)
            (*d->plug_upcall)(q, d->private);

    - Similar in the unplug functions (the queue-side pieces are pulled
    together in the sketch below).
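
    Pulling the queue-side bits together (the field and function names here
    are invented for illustration, and locking is waved away):

    /* new member in struct request_queue */
    struct list_head interested_driver_list;

    /* initialised wherever the queue itself is set up */
    INIT_LIST_HEAD(&q->interested_driver_list);

    /* and the unplug functions do the mirror-image walk of blk_plug_device() */
    static void blk_notify_interested_unplug(struct request_queue *q)
    {
            struct interested_driver *d;

            list_for_each_entry(d, &q->interested_driver_list, list)
                    (*d->unplug_upcall)(q, d->private);
    }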


    And in dm, maintain a dynamically allocated array of request_queue*'s:

    static void dm_plug_upcall(struct request_queue *q, void *private)
    {
            /* "struct dm_map" stands in for whatever dm structure owns the array */
            struct dm_map *map = private;

            map->plugged_queues[map->nr_plugged_queues++] = q;
    }

    static void dm_unplug_upcall(struct request_queue *q, void *private)
    {
            struct dm_map *map = private;
            int i;

            for (i = 0; i < map->nr_plugged_queues; i++) {
                    if (map->plugged_queues[i] == q) {
                            memmove(&map->plugged_queues[i],
                                    &map->plugged_queues[i + 1],
                                    (map->nr_plugged_queues - i - 1) *
                                            sizeof(map->plugged_queues[0]));
                            map->nr_plugged_queues--;
                            break;
                    }
            }
    }

    The unplug_upcall is a bit sucky, but these upcalls will be much less
    frequent than the current unplug downcalls.
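
    The payoff comes when dm's own unplug runs: it only needs to kick the
    queues it knows are plugged, rather than every underlying device. A
    rough sketch, with the same made-up dm_map naming as above and locking
    again waved away:

    static void dm_unplug_all(struct dm_map *map)
    {
            while (map->nr_plugged_queues) {
                    /*
                     * Pop the entry first, so the dm_unplug_upcall()
                     * triggered by the unplug below finds nothing left
                     * to remove.
                     */
                    struct request_queue *q =
                            map->plugged_queues[--map->nr_plugged_queues];

                    if (q->unplug_fn)
                            q->unplug_fn(q);
            }
    }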

    Frankly, I wouldn't bother. 0.5% CPU when hammering the crap out of a
    56-disk LVM isn't too bad. I'd suggest you first try to simply reduce
    the number of cache misses in the inner loop and see what that leaves
    us with.
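
    For illustration, assuming the hot path is the unplug walk over a map's
    underlying devices, one cheap win would be caching the request_queue
    pointer when the map is built, so the loop chases one pointer per device
    instead of three. The structure and field names here are made up:

    /* hypothetical per-device entry in a dm map */
    struct dm_dev_entry {
            struct list_head list;
            struct block_device *bdev;
            struct request_queue *q;        /* cached bdev_get_queue(bdev) */
    };

    /* filled in once, at map-build time */
    entry->q = bdev_get_queue(entry->bdev);

    /* the inner loop then touches only the entry and the queue itself */
    static void dm_map_unplug_devices(struct list_head *devices)
    {
            struct dm_dev_entry *entry;

            list_for_each_entry(entry, devices, list) {
                    struct request_queue *q = entry->q;

                    if (q->unplug_fn)
                            q->unplug_fn(q);
            }
    }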


