Subject: Re: [PATCH v3 02/13] xen/pvcalls: implement frontend disconnect
On Fri, 11 Aug 2017, Boris Ostrovsky wrote:
> On 07/31/2017 06:57 PM, Stefano Stabellini wrote:
> > Introduce a data structure named pvcalls_bedata. It contains pointers to
> > the command ring, the event channel, a list of active sockets and a list
> > of passive sockets. List accesses are protected by a spin_lock.
> >
> > Introduce a waitqueue to allow waiting for a response on commands sent
> > to the backend.
> >
> > Introduce an array of struct xen_pvcalls_response to store command
> > responses.
> >
> > Implement pvcalls frontend removal function. Go through the list of
> > active and passive sockets and free them all, one at a time.
> >
> > Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> > CC: boris.ostrovsky@oracle.com
> > CC: jgross@suse.com
> > ---
> > drivers/xen/pvcalls-front.c | 51 +++++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 51 insertions(+)
> >
> > diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
> > index a8d38c2..a126195 100644
> > --- a/drivers/xen/pvcalls-front.c
> > +++ b/drivers/xen/pvcalls-front.c
> > @@ -20,6 +20,29 @@
> > #include <xen/xenbus.h>
> > #include <xen/interface/io/pvcalls.h>
> >
> > +#define PVCALLS_INVALID_ID UINT_MAX
> > +#define PVCALLS_RING_ORDER XENBUS_MAX_RING_GRANT_ORDER
> > +#define PVCALLS_NR_REQ_PER_RING __CONST_RING_SIZE(xen_pvcalls, XEN_PAGE_SIZE)
> > +
> > +struct pvcalls_bedata {
> > + struct xen_pvcalls_front_ring ring;
> > + grant_ref_t ref;
> > + int irq;
> > +
> > + struct list_head socket_mappings;
> > + struct list_head socketpass_mappings;
> > + spinlock_t pvcallss_lock;
>
> In the backend this is called socket_lock and (subjectively) it would
> sound like a better name here too.

I'll rename it.


> > +
> > + wait_queue_head_t inflight_req;
> > + struct xen_pvcalls_response rsp[PVCALLS_NR_REQ_PER_RING];
> > +};
> > +static struct xenbus_device *pvcalls_front_dev;
> > +
> > +static irqreturn_t pvcalls_front_event_handler(int irq, void *dev_id)
> > +{
> > + return IRQ_HANDLED;
> > +}
> > +
> > static const struct xenbus_device_id pvcalls_front_ids[] = {
> > { "pvcalls" },
> > { "" }
> > @@ -27,6 +50,34 @@
> >
> > static int pvcalls_front_remove(struct xenbus_device *dev)
> > {
> > + struct pvcalls_bedata *bedata;
> > + struct sock_mapping *map = NULL, *n;
> > +
> > + bedata = dev_get_drvdata(&pvcalls_front_dev->dev);
> > +
> > + list_for_each_entry_safe(map, n, &bedata->socket_mappings, list) {
> > + mutex_lock(&map->active.in_mutex);
> > + mutex_lock(&map->active.out_mutex);
> > + pvcalls_front_free_map(bedata, map);
> > + mutex_unlock(&map->active.out_mutex);
> > + mutex_unlock(&map->active.in_mutex);
> > + kfree(map);
>
> I think this is the same issue as the one discussed for some other patch
> --- unlocking and then immediately freeing a lock.

Yes, I'll fix this too.
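For the archives, one way to avoid unlocking a mutex that lives inside the
memory we are about to free is to unhook the mapping first and then wait for
any in-flight user to drop the mutexes before freeing. A rough sketch only
(using the socket_lock name agreed above, and assuming no new user can find
the mapping once it is off the list):

	list_for_each_entry_safe(map, n, &bedata->socket_mappings, list) {
		/* Unhook first so no new user can look the mapping up. */
		spin_lock(&bedata->socket_lock);
		list_del_init(&map->list);
		spin_unlock(&bedata->socket_lock);

		/* Wait for in-flight users instead of taking the mutexes. */
		while (mutex_is_locked(&map->active.in_mutex) ||
		       mutex_is_locked(&map->active.out_mutex))
			cpu_relax();

		pvcalls_front_free_map(bedata, map);
		kfree(map);
	}

If sleepers could still be queued on the mutexes at this point, something
stronger (e.g. a refcount on the mapping) would be needed instead.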


> > + }
> > + list_for_each_entry_safe(map, n, &bedata->socketpass_mappings, list) {
> > + spin_lock(&bedata->pvcallss_lock);
> > + list_del_init(&map->list);
> > + spin_unlock(&bedata->pvcallss_lock);
> > + kfree(map);
> > + }
> > + if (bedata->irq > 0)
> > + unbind_from_irqhandler(bedata->irq, dev);
> > + if (bedata->ref >= 0)
> > + gnttab_end_foreign_access(bedata->ref, 0, 0);
> > + kfree(bedata->ring.sring);
> > + kfree(bedata);
> > + dev_set_drvdata(&dev->dev, NULL);
> > + xenbus_switch_state(dev, XenbusStateClosed);
>
> Should we first move the state to Closed and then free things up? Or it
> doesn't matter?

I believe that is already done by the xenbus driver: this function is
supposed to be called after the frontend state is set to Closing.
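For context, the usual frontend pattern looks roughly like this (handler name
and cases are illustrative, not part of this patch; it would be wired up via
the xenbus_driver .otherend_changed callback): the backend moving to Closing
drives the frontend state machine, so by the time pvcalls_front_remove() runs
the frontend has already announced Closing.

	static void pvcalls_front_changed(struct xenbus_device *dev,
					  enum xenbus_state backend_state)
	{
		switch (backend_state) {
		case XenbusStateClosing:
			xenbus_switch_state(dev, XenbusStateClosing);
			break;
		case XenbusStateClosed:
			xenbus_switch_state(dev, XenbusStateClosed);
			break;
		default:
			break;
		}
	}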


> > + pvcalls_front_dev = NULL;
> > return 0;
> > }
> >
>
