Subject: RE: [PATCH 1/6] PCI: hv: fix a race condition bug in hv_pci_query_relations()

> diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> index f33370b75628..b82c7cde19e6 100644
> --- a/drivers/pci/controller/pci-hyperv.c
> +++ b/drivers/pci/controller/pci-hyperv.c
> @@ -3308,6 +3308,19 @@ static int hv_pci_query_relations(struct hv_device *hdev)
>  	if (!ret)
>  		ret = wait_for_response(hdev, &comp);
>
> +	/*
> +	 * In the case of fast device addition/removal, it's possible that
> +	 * vmbus_sendpacket() or wait_for_response() returns -ENODEV but we
> +	 * already got a PCI_BUS_RELATIONS* message from the host and the
> +	 * channel callback already scheduled a work to hbus->wq, which can be
> +	 * running survey_child_resources() -> complete(&hbus->survey_event),
> +	 * even after hv_pci_query_relations() exits and the stack variable
> +	 * 'comp' is no longer valid. This can cause a strange hang issue
> +	 * or sometimes a page fault. Flush hbus->wq before we exit from
> +	 * hv_pci_query_relations() to avoid the issues.
> +	 */
> +	flush_workqueue(hbus->wq);

Is it possible for a PCI_BUS_RELATIONS message to arrive, and the corresponding work to be queued, after flush_workqueue(hbus->wq) has already returned?
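For context, the hazard the comment describes is the usual on-stack-completion
use-after-scope. Below is a minimal, hypothetical reduction of the pattern
(demo_ctx, demo_work_fn, demo_query are made-up names, not the driver's actual
code) showing why the early-error path needs the flush:

	/*
	 * Hypothetical sketch: a worker completes a completion that lives
	 * on the caller's stack. If the caller returns early on error
	 * without flushing the workqueue, the worker can run later and
	 * write to stack memory that has been reused.
	 */
	#include <linux/kernel.h>
	#include <linux/completion.h>
	#include <linux/workqueue.h>

	struct demo_ctx {				/* hypothetical */
		struct work_struct	work;
		struct completion	*comp;	/* points into a caller's stack */
	};

	static void demo_work_fn(struct work_struct *work)
	{
		struct demo_ctx *ctx = container_of(work, struct demo_ctx, work);

		/* Runs asynchronously; 'comp' may already be out of scope. */
		complete(ctx->comp);
	}

	static int demo_query(struct workqueue_struct *wq, struct demo_ctx *ctx)
	{
		DECLARE_COMPLETION_ONSTACK(comp);
		int ret;

		ctx->comp = &comp;
		INIT_WORK(&ctx->work, demo_work_fn);
		queue_work(wq, &ctx->work);	/* stands in for the channel callback */

		ret = -ENODEV;			/* pretend the send/wait failed */
		if (ret) {
			/*
			 * Without this flush the worker can still dereference
			 * 'comp' after we return; that is the bug the patch fixes.
			 */
			flush_workqueue(wq);
			return ret;
		}

		wait_for_completion(&comp);
		return 0;
	}

Note the sketch queues the work before the flush, so the flush is sufficient
there. The question above is about the other ordering: if the host's message
can still be delivered and queue new work after flush_workqueue() returns,
the stale stack pointer would be reachable again.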

> +
>  	return ret;
>  }
>
> --
> 2.25.1
