Subject: Re: [PATCH v2 06/10] nvme/core: add mdev interfaces
From: Maxim Levitsky
On Mon, 2019-05-06 at 11:31 +0300, Maxim Levitsky wrote:
> On Sat, 2019-05-04 at 08:49 +0200, Christoph Hellwig wrote:
> > On Fri, May 03, 2019 at 10:00:54PM +0300, Max Gurtovoy wrote:
> > > I don't see a big difference between handing an NVMe queue and
> > > namespace/partition to the guest OS or to P2P, since the IO is issued
> > > by an external entity and pooled outside the pci driver.
> >
> > We are not going to set the queue aside either way. That is what the
> > last patch in this series is already working towards, and which would
> > be the sensible vhost model to start with.
>
> Why are you saying that? I actually prefer to use a separate queue per
> software nvme controller, because of the lower overhead (about half that
> of going through the block layer) and because it is better at QoS: the
> separate queue (or even a few queues if needed) will give the guest a
> mostly guaranteed slice of the device's bandwidth.
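
To make the per-controller queue idea above concrete, here is a minimal
user-space sketch of it. Everything in it (toy_hw_queue, toy_sw_ctrl,
toy_claim_hw_queue, toy_submit) is a hypothetical illustration, not the
actual nvme/mdev code: each software controller claims its own hardware
submission queue, so its commands never pass through a shared software
queue and back-pressure stays per guest.

/*
 * Toy model of "one hardware queue per software controller".
 * All names here are made up for illustration only.
 */
#include <stdio.h>
#include <stdlib.h>

#define QUEUE_DEPTH 32

/* Stand-in for one NVMe hardware submission queue. */
struct toy_hw_queue {
	unsigned int qid;
	unsigned int head, tail;
	unsigned long long cmds[QUEUE_DEPTH];
};

/* Stand-in for one software (guest-facing) controller. */
struct toy_sw_ctrl {
	const char *name;
	struct toy_hw_queue *sq;	/* dedicated, never shared */
};

/* Hand the next unused hardware queue to a controller. */
static struct toy_hw_queue *toy_claim_hw_queue(struct toy_hw_queue *pool,
					       unsigned int nr,
					       unsigned int *next)
{
	if (*next >= nr)
		return NULL;	/* no spare queues left for passthrough */
	return &pool[(*next)++];
}

/* Submit directly to the controller's own queue, no shared layer. */
static int toy_submit(struct toy_sw_ctrl *ctrl, unsigned long long cmd)
{
	struct toy_hw_queue *sq = ctrl->sq;
	unsigned int next = (sq->tail + 1) % QUEUE_DEPTH;

	if (next == sq->head)
		return -1;	/* queue full: back-pressure hits this guest only */
	sq->cmds[sq->tail] = cmd;
	sq->tail = next;
	return 0;
}

int main(void)
{
	struct toy_hw_queue pool[4] = {
		{ .qid = 1 }, { .qid = 2 }, { .qid = 3 }, { .qid = 4 },
	};
	unsigned int next = 0;
	struct toy_sw_ctrl guest_a = { .name = "guest-a" };
	struct toy_sw_ctrl guest_b = { .name = "guest-b" };

	guest_a.sq = toy_claim_hw_queue(pool, 4, &next);
	guest_b.sq = toy_claim_hw_queue(pool, 4, &next);
	if (!guest_a.sq || !guest_b.sq)
		return EXIT_FAILURE;

	/* Each guest can only fill its own queue. */
	toy_submit(&guest_a, 0x1);
	toy_submit(&guest_b, 0x2);

	printf("%s -> qid %u, %s -> qid %u\n",
	       guest_a.name, guest_a.sq->qid,
	       guest_b.name, guest_b.sq->qid);
	return EXIT_SUCCESS;
}

The QoS point follows from the same structure: since each guest can only
fill its own queue, the controller's (by default round-robin) arbitration
across submission queues gives it a roughly fixed share of the device.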

Sorry for typos - I need more coffee :-)

Best regards,
Maxim Levitsky
