    From: Michael S. Tsirkin <mst@redhat.com>
    Subject: Re: [PATCH v5] vp_vdpa: don't allocate unused msix vectors
    On Tue, Apr 23, 2024 at 01:39:17AM +0000, Gavin Liu wrote:
    > On Wed, Apr 10, 2024 at 11:30:20AM +0800, lyx634449800 wrote:
    > > From: Yuxue Liu <yuxue.liu@jaguarmicro.com>
    > >
    > > When there is a ctlq and it doesn't require interrupt callbacks, the
    > > original method of calculating vectors wastes hardware MSI or MSI-X
    > > resources as well as system IRQ resources.
    > >
    > > When conducting performance testing using testpmd in the guest OS, it
    > > was found that the performance was lower compared to directly using
    > > vfio-pci to pass through the device.
    > >
    > > In scenarios where the virtio device in the guest OS does not use
    > > interrupts, the vdpa driver still configures the hardware's MSI-X
    > > vectors. Therefore, the hardware still sends interrupts to the host OS.
    >
    > > I just have a question on this part. How come the hardware sends interrupts? Does the guest driver not disable them?
    >
    > 1. Assuming the guest OS's virtio device is using PMD mode, QEMU sets the call fd to -1.
    > 2. On the host side, vhost_vdpa then sets vp_vdpa->vring[i].cb.callback to an invalid (NULL) value.
    > 3. Before the modification, the vp_vdpa_request_irq function did not check whether
    > vp_vdpa->vring[i].cb.callback was valid. Instead, it enabled the hardware's MSI-X
    > interrupts based on the number of queues of the device.
    >
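
    For reference, the path described in steps 1 and 2 corresponds roughly to the
    VHOST_SET_VRING_CALL handling in drivers/vhost/vdpa.c. A simplified sketch,
    not verbatim kernel code (the function name sketch_set_vring_call is made up
    for illustration; the types and ops callback are the real vDPA ones):

        /* Simplified sketch: how vhost_vdpa turns the call fd from
         * VHOST_SET_VRING_CALL into the callback that vp_vdpa later
         * inspects. */
        static void sketch_set_vring_call(struct vhost_virtqueue *vq,
                                          struct vdpa_device *vdpa, u16 idx)
        {
                const struct vdpa_config_ops *ops = vdpa->config;
                struct vdpa_callback cb;

                if (vq->call_ctx.ctx) {
                        /* QEMU passed a valid eventfd: relay completions */
                        cb.callback = vhost_vdpa_virtqueue_cb;
                        cb.private = vq;
                } else {
                        /* QEMU passed call fd == -1: no callback wanted */
                        cb.callback = NULL;
                        cb.private = NULL;
                }
                ops->set_vq_cb(vdpa, idx, &cb);
        }

    This is why, in step 3, checking vp_vdpa->vring[i].cb.callback is a reliable
    way to tell which queues actually need an MSI-X vector.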

    So MSI-X is enabled, but why would it trigger? A virtio PMD in poll mode
    presumably suppresses interrupts, after all.
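
    (Background on the suppression referred to here: with a split virtqueue, a
    polling driver sets VRING_AVAIL_F_NO_INTERRUPT in the available ring's flags.
    A minimal sketch, with endianness handling simplified:)

        /* Illustrative only: driver-side interrupt suppression for a split
         * virtqueue. The virtio spec makes this flag a hint, so the device
         * may still raise interrupts; suppression is not a guarantee. */
        static void sketch_disable_vq_interrupts(struct vring *vring)
        {
                /* avail->flags is little-endian for modern devices */
                vring->avail->flags |= cpu_to_le16(VRING_AVAIL_F_NO_INTERRUPT);
        }

    Because the flag is only a hint, an MSI-X vector that is configured but
    unused on the host can still fire, which is part of the motivation for not
    allocating such vectors at all.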

    >
    >
    > ----- Original Message -----
    > From: Michael S. Tsirkin <mst@redhat.com>
    > Sent: April 22, 2024 20:09
    > To: Gavin Liu <gavin.liu@jaguarmicro.com>
    > Cc: jasowang@redhat.com; Angus Chen <angus.chen@jaguarmicro.com>; virtualization@lists.linux.dev; xuanzhuo@linux.alibaba.com; linux-kernel@vger.kernel.org; Heng Qi <hengqi@linux.alibaba.com>
    > Subject: Re: [PATCH v5] vp_vdpa: don't allocate unused msix vectors
    >
    > On Wed, Apr 10, 2024 at 11:30:20AM +0800, lyx634449800 wrote:
    > > From: Yuxue Liu <yuxue.liu@jaguarmicro.com>
    > >
    > > When there is a ctlq and it doesn't require interrupt callbacks, the
    > > original method of calculating vectors wastes hardware MSI or MSI-X
    > > resources as well as system IRQ resources.
    > >
    > > When conducting performance testing using testpmd in the guest OS, it
    > > was found that the performance was lower compared to directly using
    > > vfio-pci to pass through the device.
    > >
    > > In scenarios where the virtio device in the guest OS does not use
    > > interrupts, the vdpa driver still configures the hardware's MSI-X
    > > vectors. Therefore, the hardware still sends interrupts to the host OS.
    >
    > I just have a question on this part. How come the hardware sends interrupts? Does the guest driver not disable them?
    >
    > > Because of this unnecessary
    > > action by the hardware, hardware performance decreases, and it also
    > > affects the performance of the host OS.
    > >
    > > Before modification (interrupt mode):
    > > 32: 0 0 0 0 PCI-MSI 32768-edge vp-vdpa[0000:00:02.0]-0
    > > 33: 0 0 0 0 PCI-MSI 32769-edge vp-vdpa[0000:00:02.0]-1
    > > 34: 0 0 0 0 PCI-MSI 32770-edge vp-vdpa[0000:00:02.0]-2
    > > 35: 0 0 0 0 PCI-MSI 32771-edge vp-vdpa[0000:00:02.0]-config
    > >
    > > After modification (interrupt mode):
    > > 32: 0 0 1 7 PCI-MSI 32768-edge vp-vdpa[0000:00:02.0]-0
    > > 33: 36 0 3 0 PCI-MSI 32769-edge vp-vdpa[0000:00:02.0]-1
    > > 34: 0 0 0 0 PCI-MSI 32770-edge vp-vdpa[0000:00:02.0]-config
    > >
    > > Before modification (virtio PMD mode in the guest OS):
    > > 32: 0 0 0 0 PCI-MSI 32768-edge vp-vdpa[0000:00:02.0]-0
    > > 33: 0 0 0 0 PCI-MSI 32769-edge vp-vdpa[0000:00:02.0]-1
    > > 34: 0 0 0 0 PCI-MSI 32770-edge vp-vdpa[0000:00:02.0]-2
    > > 35: 0 0 0 0 PCI-MSI 32771-edge vp-vdpa[0000:00:02.0]-config
    > >
    > > After modification (virtio PMD mode in the guest OS):
    > > 32: 0 0 0 0 PCI-MSI 32768-edge vp-vdpa[0000:00:02.0]-config
    > >
    > > To verify the use of the virtio PMD mode in the guest operating
    > > system, the following patch needs to be applied to QEMU:
    > > https://lore.kernel.org/all/20240408073311.2049-1-yuxue.liu@jaguarmicro.com
    > >
    > > Signed-off-by: Yuxue Liu <yuxue.liu@jaguarmicro.com>
    > > Acked-by: Jason Wang <jasowang@redhat.com>
    > > Reviewed-by: Heng Qi <hengqi@linux.alibaba.com>
    > > ---
    > > V5: modify the description of the printout when an exception occurs
    > > V4: update the title and assign values to uninitialized variables
    > > V3: delete unused variables and add validation records
    > > V2: fix when allocating IRQs, scan all queues
    > >
    > > drivers/vdpa/virtio_pci/vp_vdpa.c | 22 ++++++++++++++++------
    > > 1 file changed, 16 insertions(+), 6 deletions(-)
    > >
    > > diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
    > > index df5f4a3bccb5..8de0224e9ec2 100644
    > > --- a/drivers/vdpa/virtio_pci/vp_vdpa.c
    > > +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
    > > @@ -160,7 +160,13 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
    > >          struct pci_dev *pdev = mdev->pci_dev;
    > >          int i, ret, irq;
    > >          int queues = vp_vdpa->queues;
    > > -        int vectors = queues + 1;
    > > +        int vectors = 1;
    > > +        int msix_vec = 0;
    > > +
    > > +        for (i = 0; i < queues; i++) {
    > > +                if (vp_vdpa->vring[i].cb.callback)
    > > +                        vectors++;
    > > +        }
    > >
    > >          ret = pci_alloc_irq_vectors(pdev, vectors, vectors, PCI_IRQ_MSIX);
    > >          if (ret != vectors) {
    > > @@ -173,9 +179,12 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
    > >          vp_vdpa->vectors = vectors;
    > >
    > >          for (i = 0; i < queues; i++) {
    > > +                if (!vp_vdpa->vring[i].cb.callback)
    > > +                        continue;
    > > +
    > >                  snprintf(vp_vdpa->vring[i].msix_name, VP_VDPA_NAME_SIZE,
    > >                           "vp-vdpa[%s]-%d\n", pci_name(pdev), i);
    > > -                irq = pci_irq_vector(pdev, i);
    > > +                irq = pci_irq_vector(pdev, msix_vec);
    > >                  ret = devm_request_irq(&pdev->dev, irq,
    > >                                         vp_vdpa_vq_handler,
    > >                                         0, vp_vdpa->vring[i].msix_name,
    > > @@ -185,21 +194,22 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
    > >                          "vp_vdpa: fail to request irq for vq %d\n", i);
    > >                          goto err;
    > >                  }
    > > -                vp_modern_queue_vector(mdev, i, i);
    > > +                vp_modern_queue_vector(mdev, i, msix_vec);
    > >                  vp_vdpa->vring[i].irq = irq;
    > > +                msix_vec++;
    > >          }
    > >
    > >          snprintf(vp_vdpa->msix_name, VP_VDPA_NAME_SIZE, "vp-vdpa[%s]-config\n",
    > >                   pci_name(pdev));
    > > -        irq = pci_irq_vector(pdev, queues);
    > > +        irq = pci_irq_vector(pdev, msix_vec);
    > >          ret = devm_request_irq(&pdev->dev, irq, vp_vdpa_config_handler, 0,
    > >                                 vp_vdpa->msix_name, vp_vdpa);
    > >          if (ret) {
    > >                  dev_err(&pdev->dev,
    > > -                        "vp_vdpa: fail to request irq for vq %d\n", i);
    > > +                        "vp_vdpa: fail to request irq for config: %d\n",
    > > +                        ret);
    > >                  goto err;
    > >          }
    > > -        vp_modern_config_vector(mdev, queues);
    > > +        vp_modern_config_vector(mdev, msix_vec);
    > >          vp_vdpa->config_irq = irq;
    > >
    > >          return 0;
    > > --
    > > 2.43.0
    >

