Subject: Re: [PATCH 0/3] blk-mq & nvme: introduce .map_changed
From: Ming Lei
On Wed, Sep 30, 2015 at 6:45 AM, Keith Busch <keith.busch@intel.com> wrote:
> On Tue, 29 Sep 2015, Ming Lei wrote:
>>
>> Yes, I thought of that before, but it has the following cons:
>>
>> - some drivers/devices may need a different IRQ affinity policy, such as
>> virtio devices, which have their own set-affinity handler (see
>> virtqueue_set_affinity()),
>
>
> That's not a very good example to support your cause; virtio_scsi's use
> is a perfect example of one that would benefit from letting blk-mq
> handle affinity. virtio_scsi sets affinity only when there is a 1:1
> mapping of CPUs to queues, but the driver doesn't know the mapping
> that blk-mq used, creating a potentially less than optimal mapping.

The 1:1 mapping was introduced before blk-mq, and that doesn't mean we
have to do the same for blk-mq.

Actually, I mean that virtio-scsi just lets the first CPU of the cpumask handle
the virt-queue's IRQ, instead of all the CPUs mapped to the hw queue (virt-queue).
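
To illustrate the policy I mean (this is only a sketch, not the actual
virtio-scsi code; the helper name is made up, while cpumask_first() and
virtqueue_set_affinity() are existing kernel APIs):

    #include <linux/blk-mq.h>
    #include <linux/cpumask.h>
    #include <linux/virtio.h>

    /*
     * Sketch only: pin the virt-queue's IRQ to the first CPU of the
     * cpumask that blk-mq mapped to this hw queue, rather than letting
     * every mapped CPU service the interrupt.
     */
    static void example_pin_vq_to_first_cpu(struct blk_mq_hw_ctx *hctx,
                                            struct virtqueue *vq)
    {
            int cpu = cpumask_first(hctx->cpumask);

            virtqueue_set_affinity(vq, cpu);
    }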

>
>> - the block core has to get the IRQ vector information, which has to be
>> set up/finalized before blk-mq uses it for setting IRQ affinity; for
>> example, in the case of NVMe's admin queue, its vector can be changed
>> after the admin queue's initialization.
>
>
> Why do you want to put a hint on the admin queue's irq?

No, I don't want to; it was just an example. I mean that other drivers/devices
may have this kind of situation too.
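
For what it's worth, the kind of driver-side hook I have in mind looks
roughly like the following. This is only a sketch: the callback name and
signature are hypothetical and not necessarily what the patches define;
irq_set_affinity_hint() is the existing kernel API a driver would
typically call here.

    #include <linux/blk-mq.h>
    #include <linux/interrupt.h>

    /*
     * Hypothetical .map_changed-style handler: once blk-mq has settled
     * the cpu-to-hw-queue mapping, the driver re-applies its own
     * affinity policy with whatever IRQ vector it ended up with.
     */
    static void example_map_changed(struct blk_mq_hw_ctx *hctx, int irq)
    {
            irq_set_affinity_hint(irq, hctx->cpumask);
    }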

--
Ming Lei

