Subject: Re: Re: [PATCH] make blk_mq_map_queues more friendly for cpu topology



Actually, I just bought a VM from a public cloud provider and ran into this problem.
After reading the code and comparing the PCI device info, I reproduced this scenario.

Since ordinary users cannot change the number of MSI vectors, I suggest making
blk_mq_map_queues more friendly to the CPU topology; it may be the mapping of last resort.
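
For reference, the current generic mapping looks roughly like this (a simplified
sketch of blk_mq_map_queues() from block/blk-mq-cpumap.c around v5.0; the helpers
cpu_to_queue_index() and get_first_sibling() are assumed from that file, and some
details are trimmed, so treat it as an illustration rather than the exact source):

/*
 * The first nr_queues CPUs are mapped sequentially; any remaining CPU
 * follows its first hyperthread sibling. On some topologies this can
 * spread the queues unevenly across cores.
 */
int blk_mq_map_queues(struct blk_mq_queue_map *qmap)
{
	unsigned int *map = qmap->mq_map;
	unsigned int nr_queues = qmap->nr_queues;
	unsigned int cpu, first_sibling;

	for_each_possible_cpu(cpu) {
		if (cpu < nr_queues) {
			/* sequential mapping for the first nr_queues CPUs */
			map[cpu] = cpu_to_queue_index(qmap, nr_queues, cpu);
		} else {
			/* later CPUs share the queue of their first sibling */
			first_sibling = get_first_sibling(cpu);
			if (first_sibling == cpu)
				map[cpu] = cpu_to_queue_index(qmap, nr_queues, cpu);
			else
				map[cpu] = map[first_sibling];
		}
	}
	return 0;
}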





At 2019-03-27 16:16:19, "Christoph Hellwig" <hch@lst.de> wrote:
>On Tue, Mar 26, 2019 at 03:55:10PM +0800, luferry wrote:
>>
>>
>>
>> At 2019-03-26 15:39:54, "Christoph Hellwig" <hch@lst.de> wrote:
>> >Why isn't this using the automatic PCI-level affinity assignment to
>> >start with?
>>
>> When virtio-blk is enabled with multiple queues but only 2 MSI-X vectors,
>> vp_dev->per_vq_vectors will be false and vp_get_vq_affinity will return
>> NULL directly, so blk_mq_virtio_map_queues will fall back to blk_mq_map_queues.
>
>What is the point of the multiqueue mode if you don't have enough
>(virtual) MSI-X vectors?
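
The fallback path described in the quoted exchange looks roughly like this
(a simplified sketch based on drivers/virtio/virtio_pci_common.c and
block/blk-mq-virtio.c around v5.0, lightly trimmed rather than a verbatim copy):

/* Without one MSI-X vector per VQ there is no affinity info to report. */
const struct cpumask *vp_get_vq_affinity(struct virtio_device *vdev, int index)
{
	struct virtio_pci_device *vp_dev = to_vp_device(vdev);

	if (!vp_dev->per_vq_vectors ||
	    vp_dev->vqs[index]->msix_vector == VIRTIO_MSI_NO_VECTOR)
		return NULL;	/* shared-vector case: no per-VQ affinity */

	return pci_irq_get_affinity(vp_dev->pci_dev,
				    vp_dev->vqs[index]->msix_vector);
}

int blk_mq_virtio_map_queues(struct blk_mq_queue_map *qmap,
		struct virtio_device *vdev, int first_vec)
{
	const struct cpumask *mask;
	unsigned int queue, cpu;

	if (!vdev->config->get_vq_affinity)
		goto fallback;

	for (queue = 0; queue < qmap->nr_queues; queue++) {
		mask = vdev->config->get_vq_affinity(vdev, first_vec + queue);
		if (!mask)
			goto fallback;	/* NULL affinity: no IRQ info to use */

		for_each_cpu(cpu, mask)
			qmap->mq_map[cpu] = qmap->queue_offset + queue;
	}
	return 0;

fallback:
	return blk_mq_map_queues(qmap);	/* generic last-resort mapping */
}

With only 2 vectors the device takes the shared-vector case, so every queue hits
the fallback and the generic mapping decides the CPU-to-queue layout.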