Subject: RE: [PATCH v1 6/8] vfio/type1: Bind guest page tables to host
    > From: Tian, Kevin <kevin.tian@intel.com>
    > Sent: Thursday, April 2, 2020 10:12 AM
    > To: Liu, Yi L <yi.l.liu@intel.com>; alex.williamson@redhat.com;
    > Subject: RE: [PATCH v1 6/8] vfio/type1: Bind guest page tables to host
    >
    > > From: Liu, Yi L <yi.l.liu@intel.com>
    > > Sent: Wednesday, April 1, 2020 5:13 PM
    > >
    > > > From: Tian, Kevin <kevin.tian@intel.com>
    > > > Sent: Monday, March 30, 2020 8:46 PM
    > > > Subject: RE: [PATCH v1 6/8] vfio/type1: Bind guest page tables to
    > > > host
    > > >
    > > > > From: Liu, Yi L <yi.l.liu@intel.com>
    > > > > Sent: Sunday, March 22, 2020 8:32 PM
    > > > >
    > > > > From: Liu Yi L <yi.l.liu@intel.com>
    > > > >
    > > > > VFIO_TYPE1_NESTING_IOMMU is an IOMMU type which is backed by
    > > > > hardware IOMMUs that have nesting DMA translation (a.k.a. dual-stage
    > > > > address translation). For such hardware IOMMUs there are two
    > > > > stages/levels of address translation, and software may let the
    > > > > userspace/VM own the first-level/stage-1 translation structures.
    > > > > An example of such usage is vSVA (virtual Shared Virtual Addressing):
    > > > > the VM owns the first-level/stage-1 translation structures and binds
    > > > > the structures to the host, and the hardware IOMMU then uses nesting
    > > > > translation when doing DMA translation for the devices behind such a
    > > > > hardware IOMMU.
    > > > >
    > > > > This patch adds vfio support for binding the guest translation
    > > > > (a.k.a. stage-1) structures to the host iommu. For
    > > > > VFIO_TYPE1_NESTING_IOMMU, binding the guest page table is not
    > > > > enough: an interface must also be exposed to the guest for iommu
    > > > > cache invalidation when the guest modifies the first-level/stage-1
    > > > > translation structures, since hardware needs to be notified to
    > > > > flush stale iotlbs. That will be introduced in the next patch.
    > > > >
    > > > > In this patch, guest page table bind and unbind are done using the
    > > > > flags VFIO_IOMMU_BIND_GUEST_PGTBL and VFIO_IOMMU_UNBIND_GUEST_PGTBL
    > > > > under the IOCTL VFIO_IOMMU_BIND; the bind/unbind data are conveyed
    > > > > by struct iommu_gpasid_bind_data. Before binding a guest page table
    > > > > to the host, the VM should already have a PASID allocated by the
    > > > > host via VFIO_IOMMU_PASID_REQUEST.
    > > > >
    > > > > Bind guest translation structures (here is guest page table) to
    > > > > host
    > > >
    > > > Bind -> Binding
    > > got it.
    > > > > are the first step to setup vSVA (Virtual Shared Virtual Addressing).
    > > >
    > > > are -> is. and you already explained vSVA earlier.
    > > oh yes, it is.
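
    To make the flow described in the commit message concrete, below is a rough
    userspace (VMM-side) sketch. The layout of struct vfio_iommu_type1_bind and
    the iommu_gpasid_bind_data field names (gpgd/gpasid/hpasid) are assumptions
    taken from this series and its companion IOMMU series rather than final
    uapi, and container_fd/host_pasid/guest_pasid/guest_pgd_gpa are placeholders
    supplied by the caller:

	/*
	 * Sketch only: bind a guest stage-1 page table to a host PASID.
	 * Struct layouts and flag names follow this proposal and may change.
	 */
	#include <stdlib.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/vfio.h>		/* uapi proposed by this series */
	#include <linux/iommu.h>	/* iommu_gpasid_bind_data (companion series) */

	static int bind_guest_pgtbl(int container_fd, __u64 host_pasid,
				    __u64 guest_pasid, __u64 guest_pgd_gpa)
	{
		struct iommu_gpasid_bind_data gbind = { 0 };
		struct vfio_iommu_type1_bind *bind;
		int ret;

		gbind.hpasid = host_pasid;	/* PASID from VFIO_IOMMU_PASID_REQUEST */
		gbind.gpasid = guest_pasid;	/* PASID value the guest programs */
		gbind.gpgd = guest_pgd_gpa;	/* guest first-level/stage-1 page table */

		bind = calloc(1, sizeof(*bind) + sizeof(gbind));
		if (!bind)
			return -1;
		bind->argsz = sizeof(*bind) + sizeof(gbind);
		bind->flags = VFIO_IOMMU_BIND_GUEST_PGTBL;
		memcpy(bind->data, &gbind, sizeof(gbind));

		ret = ioctl(container_fd, VFIO_IOMMU_BIND, bind);
		free(bind);
		return ret;
	}
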
    > > > >
    > > > > Cc: Kevin Tian <kevin.tian@intel.com>
    > > > > CC: Jacob Pan <jacob.jun.pan@linux.intel.com>
    > > > > Cc: Alex Williamson <alex.williamson@redhat.com>
    > > > > Cc: Eric Auger <eric.auger@redhat.com>
    > > > > Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
    > > > > Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.com>
    > > > > Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
    > > > > Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
    > > > > ---
    > > > > drivers/vfio/vfio_iommu_type1.c | 158 ++++++++++++++++++++++++++++++++++++++++
    > > > > include/uapi/linux/vfio.h       |  46 ++++++++++++
    > > > > 2 files changed, 204 insertions(+)
    > > > >
    > > > > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
    > > > > index 82a9e0b..a877747 100644
    > > > > --- a/drivers/vfio/vfio_iommu_type1.c
    > > > > +++ b/drivers/vfio/vfio_iommu_type1.c
    > > > > @@ -130,6 +130,33 @@ struct vfio_regions {
    > > > > #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu) \
    > > > > (!list_empty(&iommu->domain_list))
    > > > >
    > > > > +struct domain_capsule {
    > > > > +	struct iommu_domain *domain;
    > > > > +	void *data;
    > > > > +};
    > > > > +
    > > > > +/* iommu->lock must be held */
    > > > > +static int vfio_iommu_for_each_dev(struct vfio_iommu *iommu,
    > > > > +			int (*fn)(struct device *dev, void *data),
    > > > > +			void *data)
    > > > > +{
    > > > > +	struct domain_capsule dc = { .data = data };
    > > > > +	struct vfio_domain *d;
    > > > > +	struct vfio_group *g;
    > > > > +	int ret = 0;
    > > > > +
    > > > > +	list_for_each_entry(d, &iommu->domain_list, next) {
    > > > > +		dc.domain = d->domain;
    > > > > +		list_for_each_entry(g, &d->group_list, next) {
    > > > > +			ret = iommu_group_for_each_dev(g->iommu_group,
    > > > > +						       &dc, fn);
    > > > > +			if (ret)
    > > > > +				break;
    > > > > +		}
    > > > > +	}
    > > > > +	return ret;
    > > > > +}
    > > > > +
    > > > > static int put_pfn(unsigned long pfn, int prot);
    > > > >
    > > > > /*
    > > > > @@ -2314,6 +2341,88 @@ static int vfio_iommu_info_add_nesting_cap(struct vfio_iommu *iommu,
    > > > >  	return 0;
    > > > >  }
    > > > >
    > > > > +static int vfio_bind_gpasid_fn(struct device *dev, void *data)
    > > > > +{
    > > > > +	struct domain_capsule *dc = (struct domain_capsule *)data;
    > > > > +	struct iommu_gpasid_bind_data *gbind_data =
    > > > > +		(struct iommu_gpasid_bind_data *) dc->data;
    > > > > +
    > > >
    > > > In Jacob's vSVA iommu series, [PATCH 06/11]:
    > > >
    > > > +	/* REVISIT: upper layer/VFIO can track host process that bind the PASID.
    > > > +	 * ioasid_set = mm might be sufficient for vfio to check pasid VMM
    > > > +	 * ownership.
    > > > +	 */
    > > >
    > > > I asked him who exactly should be responsible for tracking the pasid
    > > > ownership. Although there is no response yet, I expect vfio/iommu to
    > > > have a clear policy, and to document it here as well so the message
    > > > stays consistent.
    > >
    > > yep.
    > >
    > > > > +	return iommu_sva_bind_gpasid(dc->domain, dev, gbind_data);
    > > > > +}
    > > > > +
    > > > > +static int vfio_unbind_gpasid_fn(struct device *dev, void *data)
    > > > > +{
    > > > > +	struct domain_capsule *dc = (struct domain_capsule *)data;
    > > > > +	struct iommu_gpasid_bind_data *gbind_data =
    > > > > +		(struct iommu_gpasid_bind_data *) dc->data;
    > > > > +
    > > > > +	return iommu_sva_unbind_gpasid(dc->domain, dev,
    > > > > +				       gbind_data->hpasid);
    > > >
    > > > Curious why we have to share the same bind_data structure between
    > > > bind and unbind, especially when unbind requires only one field? I
    > > > didn't see a clear reason, and it is similar to the earlier
    > > > ALLOC/FREE, which don't share a structure either. The current way
    > > > simply wastes space for the unbind operation...
    > >
    > > No special reason today. But gIOVA support over nested translation is
    > > planned, and it may require a flag to indicate it, as the guest iommu
    > > driver may use a single PASID value (RID2PASID) for all devices on the
    > > guest side, especially if the RID2PASID value used for IOVA is the
    > > same as on the host side. So adding a flag to indicate that the
    > > binding is for IOVA is helpful. For PF/VF, the iommu driver just binds
    > > with the host side's RID2PASID. While for ADI (Assignable Device
    > > Interface), the vfio layer needs to figure out the default PASID
    > > stored in the aux-domain, and the iommu driver then binds the gIOVA
    > > table to that default PASID. The potential flag is required in both
    > > the bind and unbind paths. As such, it would be better to share the
    > > structure.
    >
    > I'm fine with it if you are pretty sure that more extension will be required in the
    > future, though I didn't fully understand the above explanations. 😊
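
    To illustrate the kind of extension described above, a purely hypothetical
    sketch of how vfio_unbind_gpasid_fn() might evolve: neither an
    IOMMU_SVA_GPASID_IOVA flag nor a vfio_default_pasid_of() helper exists in
    this series, they only stand in for the gIOVA/RID2PASID idea, and the shared
    structure is what lets both the bind and unbind paths look at the same flag:

	/*
	 * Hypothetical sketch only: a future "this binding is for gIOVA
	 * (RID2PASID)" flag consulted on the unbind path as well.
	 */
	static int vfio_unbind_gpasid_fn(struct device *dev, void *data)
	{
		struct domain_capsule *dc = (struct domain_capsule *)data;
		struct iommu_gpasid_bind_data *gbind_data =
				(struct iommu_gpasid_bind_data *)dc->data;
		u64 pasid = gbind_data->hpasid;

		if (gbind_data->flags & IOMMU_SVA_GPASID_IOVA)
			/* ADI case: redirect to the aux-domain default PASID */
			pasid = vfio_default_pasid_of(dev);

		return iommu_sva_unbind_gpasid(dc->domain, dev, pasid);
	}
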
    >
    > >
    > > > > +}
    > > > > +
    > > > > +/**
    > > > > + * Unbind specific gpasid, caller of this function requires hold
    > > > > + * vfio_iommu->lock
    > > > > + */
    > > > > +static long vfio_iommu_type1_do_guest_unbind(struct vfio_iommu
    > > > > *iommu,
    > > > > + struct iommu_gpasid_bind_data *gbind_data)
    > > {
    > > > > + return vfio_iommu_for_each_dev(iommu,
    > > > > + vfio_unbind_gpasid_fn, gbind_data); }
    > > > > +
    > > > > +static long vfio_iommu_type1_bind_gpasid(struct vfio_iommu *iommu,
    > > > > +				struct iommu_gpasid_bind_data *gbind_data)
    > > > > +{
    > > > > +	int ret = 0;
    > > > > +
    > > > > +	mutex_lock(&iommu->lock);
    > > > > +	if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)) {
    > > > > +		ret = -EINVAL;
    > > > > +		goto out_unlock;
    > > > > +	}
    > > > > +
    > > > > +	ret = vfio_iommu_for_each_dev(iommu,
    > > > > +				      vfio_bind_gpasid_fn, gbind_data);
    > > > > +	/*
    > > > > +	 * If bind failed, it may not be a total failure. Some devices
    > > > > +	 * within the iommu group may have been bound successfully.
    > > > > +	 * Although we don't enable the pasid capability for
    > > > > +	 * non-singleton iommu groups, an unbind operation would be
    > > > > +	 * helpful to ensure there is no partial binding for an iommu
    > > > > +	 * group.
    > > > > +	 */
    > > > > +	if (ret)
    > > > > +		/*
    > > > > +		 * Undo all binds that already succeeded, no need to
    > > >
    > > > binds -> bindings
    > > got it.
    > > >
    > > > > +		 * check the return value here since some device within
    > > > > +		 * the group has no successful bind when coming to this
    > > > > +		 * place switch.
    > > > > +		 */
    > > >
    > > > remove 'switch'
    > > oh, yes.
    > >
    > > >
    > > > > +		vfio_iommu_type1_do_guest_unbind(iommu, gbind_data);
    > > > > +
    > > > > +out_unlock:
    > > > > +	mutex_unlock(&iommu->lock);
    > > > > +	return ret;
    > > > > +}
    > > > > +
    > > > > +static long vfio_iommu_type1_unbind_gpasid(struct vfio_iommu *iommu,
    > > > > +				struct iommu_gpasid_bind_data *gbind_data)
    > > > > +{
    > > > > +	int ret = 0;
    > > > > +
    > > > > +	mutex_lock(&iommu->lock);
    > > > > +	if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)) {
    > > > > +		ret = -EINVAL;
    > > > > +		goto out_unlock;
    > > > > +	}
    > > > > +
    > > > > +	ret = vfio_iommu_type1_do_guest_unbind(iommu, gbind_data);
    > > > > +
    > > > > +out_unlock:
    > > > > +	mutex_unlock(&iommu->lock);
    > > > > +	return ret;
    > > > > +}
    > > > > +
    > > > >  static long vfio_iommu_type1_ioctl(void *iommu_data,
    > > > >  				   unsigned int cmd, unsigned long arg)
    > > > >  {
    > > > > @@ -2471,6 +2580,55 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
    > > > >  		default:
    > > > >  			return -EINVAL;
    > > > >  		}
    > > > > +
    > > > > +	} else if (cmd == VFIO_IOMMU_BIND) {
    > > >
    > > > BIND what? VFIO_IOMMU_BIND_PASID sounds clearer to me.
    > >
    > > Emm, it's up to the flags to indicate what to bind. It was proposed to
    > > cover the three cases below:
    > > a) BIND/UNBIND_GPASID
    > > b) BIND/UNBIND_GPASID_TABLE
    > > c) BIND/UNBIND_PROCESS
    > > <only a) is covered in this patch>
    > > So it's called VFIO_IOMMU_BIND.
    >
    > But aren't they all about PASID-related binding?

    yeah, I can rename it. :-)
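
    For completeness, the flag dispatch for case a) under VFIO_IOMMU_BIND might
    look roughly like the sketch below. The bind.argsz/flags layout is the one
    proposed by this series, iommu is the handler's struct vfio_iommu pointer,
    and a real handler would also validate the payload version/size, which this
    simplified illustration omits:

	} else if (cmd == VFIO_IOMMU_BIND) {
		struct vfio_iommu_type1_bind bind;
		struct iommu_gpasid_bind_data gbind_data;
		unsigned long minsz;

		minsz = offsetofend(struct vfio_iommu_type1_bind, flags);

		if (copy_from_user(&bind, (void __user *)arg, minsz))
			return -EFAULT;
		if (bind.argsz < minsz)
			return -EINVAL;

		/* the iommu_gpasid_bind_data payload follows the header */
		if (copy_from_user(&gbind_data, (void __user *)(arg + minsz),
				   sizeof(gbind_data)))
			return -EFAULT;

		switch (bind.flags) {
		case VFIO_IOMMU_BIND_GUEST_PGTBL:
			return vfio_iommu_type1_bind_gpasid(iommu, &gbind_data);
		case VFIO_IOMMU_UNBIND_GUEST_PGTBL:
			return vfio_iommu_type1_unbind_gpasid(iommu, &gbind_data);
		default:
			return -EINVAL;
		}
	}
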

    Regards,
    Yi Liu
