Subject: Re: [PATCH 1/2] vdpa/mlx5: Avoid unnecessary query virtqueue
From: Jason Wang <jasowang@redhat.com>
Date: 2021-02-02

On 2021/2/2 12:15 PM, Si-Wei Liu wrote:
> On Mon, Feb 1, 2021 at 7:13 PM Jason Wang <jasowang@redhat.com> wrote:
>>
>> On 2021/2/2 3:17 AM, Si-Wei Liu wrote:
>>> On Mon, Feb 1, 2021 at 10:51 AM Si-Wei Liu <siwliu.kernel@gmail.com> wrote:
>>>> On Thu, Jan 28, 2021 at 5:46 AM Eli Cohen <elic@nvidia.com> wrote:
>>>>> suspend_vq should only suspend the VQ and not save the current available
>>>>> index. Saving the index is done when a change of map occurs and the
>>>>> driver calls save_channel_info().
>>>> Hmmm, suspend_vq() is also called by teardown_vq(), and the latter
>>>> doesn't save the available index, since save_channel_info() doesn't
>>>> get called in that path at all. How does it handle the case where
>>>> get_vq_state() is called from userspace (e.g. QEMU) while the
>>>> hardware VQ object has been torn down, but userspace still wants to
>>>> access the queue index?
>>>>
>>>> Refer to https://lore.kernel.org/netdev/1601583511-15138-1-git-send-email-si-wei.liu@oracle.com/
>>>>
>>>> vhost VQ 0 ring restore failed: -1: Resource temporarily unavailable (11)
>>>> vhost VQ 1 ring restore failed: -1: Resource temporarily unavailable (11)
>>>>
>>>> QEMU will complain with the above warning while VM is being rebooted
>>>> or shut down.
>>>>
>>>> Looks to me like either the kernel driver should cover this
>>>> requirement, or userspace has to bear the burden of saving the index
>>>> and not calling into the kernel if the VQ is destroyed.
>>> Actually, userspace has no insight into whether the virtqueue will be
>>> destroyed when it merely changes the device status via set_status().
>>> Looking at the other in-tree vdpa driver, i.e. ifcvf, it doesn't
>>> behave like this. Hence this still looks to me like a Mellanox-specific
>>> mlx5_vdpa implementation detail that shouldn't be exposed to userspace.
>>
>> So I think we can simply drop this patch?
> Yep, I think so. To be honest I don't know why it has anything to do
> with the memory hotplug issue.


Eli may know more; my understanding is that during memory hotplug, QEMU
needs to propagate the new memory mappings via set_map(). For Mellanox,
that means the memory keys have to be rebuilt, so the virtqueues need to
be suspended.
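
Roughly, the flow I have in mind looks like the sketch below. This is
simplified and from memory, not the actual mlx5_vnet.c code; apart from
suspend_vqs() and save_channel_info(), which the thread already mentions,
treat the helper names as placeholders:

static int sketch_change_map(struct mlx5_vdpa_net *ndev,
			     struct vhost_iotlb *iotlb)
{
	int err;

	suspend_vqs(ndev);		/* quiesce the HW virtqueues */
	err = save_channel_info(ndev);	/* snapshot avail/used indices */
	if (err)
		return err;

	/* drop the objects referencing the old memory key, then
	 * rebuild the key against the new mappings */
	teardown_driver(ndev);			/* placeholder */
	destroy_old_mkey(ndev);			/* placeholder */
	err = create_new_mkey(ndev, iotlb);	/* placeholder */
	if (err)
		return err;

	/* recreate the VQs, resuming from the saved indices */
	return restore_channels(ndev);		/* placeholder */
}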
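
As for the QEMU warning Si-Wei quoted: if the driver snapshots the
available index whenever the HW object goes away, get_vq_state() can
answer from the cached value instead of failing. A sketch of that idea
(again not the actual driver code; it reuses query_virtqueue(),
mvq->initialized and mvq->avail_idx from the patch above, and assumes
vdpa_vq_state carries an avail_index field):

static int sketch_get_vq_state(struct mlx5_vdpa_net *ndev,
			       struct mlx5_vdpa_virtqueue *mvq,
			       struct vdpa_vq_state *state)
{
	struct mlx5_virtq_attr attr;
	int err;

	/* HW object already destroyed (e.g. after teardown_vq()):
	 * report the index cached at suspend time. */
	if (!mvq->initialized) {
		state->avail_index = mvq->avail_idx;
		return 0;
	}

	err = query_virtqueue(ndev, mvq, &attr);
	if (err)
		return err;

	state->avail_index = attr.available_index;
	return 0;
}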

Thanks


>
> -Siwei
>
>> Thanks
>>
>>
>>>> -Siwei
>>>>
>>>>
>>>>> Signed-off-by: Eli Cohen <elic@nvidia.com>
>>>>> ---
>>>>> drivers/vdpa/mlx5/net/mlx5_vnet.c | 8 --------
>>>>> 1 file changed, 8 deletions(-)
>>>>>
>>>>> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>>>> index 88dde3455bfd..549ded074ff3 100644
>>>>> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>>>> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
>>>>> @@ -1148,8 +1148,6 @@ static int setup_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq)
>>>>>
>>>>> static void suspend_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq)
>>>>> {
>>>>> - struct mlx5_virtq_attr attr;
>>>>> -
>>>>> if (!mvq->initialized)
>>>>> return;
>>>>>
>>>>> @@ -1158,12 +1156,6 @@ static void suspend_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *m
>>>>>
>>>>> if (modify_virtqueue(ndev, mvq, MLX5_VIRTIO_NET_Q_OBJECT_STATE_SUSPEND))
>>>>> mlx5_vdpa_warn(&ndev->mvdev, "modify to suspend failed\n");
>>>>> -
>>>>> - if (query_virtqueue(ndev, mvq, &attr)) {
>>>>> - mlx5_vdpa_warn(&ndev->mvdev, "failed to query virtqueue\n");
>>>>> - return;
>>>>> - }
>>>>> - mvq->avail_idx = attr.available_index;
>>>>> }
>>>>>
>>>>> static void suspend_vqs(struct mlx5_vdpa_net *ndev)
>>>>> --
>>>>> 2.29.2
>>>>>
