Subject: Re: [PATCH v3 15/16] iommu: introduce page response function
On Thu, 7 Dec 2017 12:56:55 +0000
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:

> On 06/12/17 19:25, Jacob Pan wrote:
> [...]
> >> For SMMUv3, the stall buffer may be shared between devices on
> >> some implementations, in which case the guest could prevent other
> >> devices from stalling by letting the buffer fill up.
> >> -> We might have to keep track of stalls in the host driver and
> >> set a credit or timeout to each stall, if it comes to that.
> >> -> In addition, send a terminate-all-stalls command when
> >> changing the device's domain.
> >>
> > We have the same situation in VT-d with a shared queue, which in
> > turn may affect other guests. Letting the host driver maintain a
> > record of pending page requests seems the best way to go. VT-d has
> > a way to drain the PRQ per PASID and RID combination. I guess this
> > is the same as your "terminate-all-stalls" but with finer control?
> > Or does "terminate-all-stalls" only apply to a given device?
>
> That command terminates all stalls for a given device (for all
> PASIDs). It's a bit awkward to implement but should be enough to
> ensure that we don't leak any outstanding faults to the next VM.
>
OK. In any case, I think this terminate request should come from the
drivers or VFIO, not be initiated by the IOMMU.
> > Seems we can implement a generic timeout/credit mechanism in the
> > IOMMU driver, with a model-specific action to drain/terminate. The
> > timeout value can also be model specific.
>
> Sounds good. Timeout seems a bit complicated to implement (and how do
> we guess what timeout would work?), so maybe it's simpler to enforce
> a quota of outstanding faults per VM, for example half of the shared
> queue size (the number can be chosen by the IOMMU driver). If a VM
> has that many outstanding faults, then any new fault is immediately
> terminated by the host. A bit rough but it might be enough to
> mitigate the problem initially, and we can always tweak it later (for
> instance disable faulting if a guest doesn't ever reply).
>
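A minimal sketch of the per-VM quota bookkeeping described above, with
entirely hypothetical names (nothing here is from the patch series),
might look like:

#include <stdbool.h>

#define SHARED_QUEUE_SIZE 128

struct vm_fault_quota {
        int outstanding;        /* faults injected, not yet answered */
        int max_outstanding;    /* credit, e.g. SHARED_QUEUE_SIZE / 2 */
};

/* Returns false when the VM is over quota; the caller would then
 * immediately terminate the new fault instead of injecting it. */
static bool vm_fault_charge(struct vm_fault_quota *q)
{
        if (q->outstanding >= q->max_outstanding)
                return false;
        q->outstanding++;
        return true;
}

/* A page response from the guest releases one credit. */
static void vm_fault_uncharge(struct vm_fault_quota *q)
{
        if (q->outstanding > 0)
                q->outstanding--;
}
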
I have to make a correction/clarification: even though VT-d has a
per-IOMMU shared queue for the PRQ, we do not stall. Ashok reminded me
of that. So there is no constraint on the IOMMU if one of the guests
does not respond; all the pressure is on the device, which may have a
limited number of pending page requests.

> Seems like VFIO should enforce this quota, since the IOMMU layer
> doesn't know which device is assigned to which VM. If it's the IOMMU
> that enforces quotas per device and a VM has 15 devices assigned,
> then the guest can still DoS the IOMMU.
>
I still think a timeout makes more sense than a quota, in that a VM
could be under quota but fail to respond to one of the devices forever.
I agree it is hard to devise a good timeout limit, but since this is to
prevent rare faults, we could pick a relatively large timeout, and we
only track the longest-pending fault per device. The error condition we
are trying to prevent is not necessarily only stall buffer overflow but
timeout as well, right?
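
A rough sketch of that timeout alternative, again with hypothetical
names, tracking only the oldest pending fault per device as suggested
above:

#include <stdbool.h>
#include <time.h>

struct dev_fault_state {
        unsigned int nr_pending;  /* unanswered faults for this device */
        time_t oldest_pending;    /* timestamp of the oldest one */
        double timeout_sec;       /* model specific, deliberately large */
};

static void dev_fault_record(struct dev_fault_state *s)
{
        /* Start the clock when the first fault becomes pending. */
        if (s->nr_pending++ == 0)
                s->oldest_pending = time(NULL);
}

static void dev_fault_response(struct dev_fault_state *s)
{
        if (!s->nr_pending)
                return;
        if (--s->nr_pending == 0)
                s->oldest_pending = 0;
        else
                s->oldest_pending = time(NULL); /* conservative restart */
}

/* Periodic check: true means the driver should run its model-specific
 * drain/terminate action for this device. */
static bool dev_fault_expired(const struct dev_fault_state *s)
{
        return s->nr_pending &&
               difftime(time(NULL), s->oldest_pending) > s->timeout_sec;
}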
> [...]
> >>> + * @type: group or stream response
> >>
> >> The page request doesn't provide this information
> >>
> > This is VT-d specific: the type is in the VT-d page request
> > descriptor, and the response descriptors differ depending on the
> > type. Since we intend the generic data to be a superset of the
> > models, I added this field.
>
> But don't you need to add the stream type to enum iommu_fault_type, in
> patch 8? Otherwise the guest can't know what type to set in the
> response.
>
> Thanks,
> Jean
>
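
To make the group-vs-stream question concrete, the generic structures
could carry the type end to end, so the guest simply echoes it back in
its response. The names and fields below are hypothetical illustrations
of the idea under discussion, not the actual definitions from the
patches:

#include <stdint.h>

enum iommu_fault_type {
        IOMMU_FAULT_DMA_UNRECOV,
        IOMMU_FAULT_PAGE_REQ,        /* expects a group response */
        IOMMU_FAULT_PAGE_REQ_STREAM, /* expects a stream response (VT-d) */
};

enum iommu_page_response_type {
        IOMMU_PAGE_GROUP_RESP,
        IOMMU_PAGE_STREAM_RESP,
};

struct iommu_page_response {
        uint32_t pasid;
        uint32_t page_req_group_id;
        enum iommu_page_response_type type; /* echoed from the fault */
        uint32_t resp_code;                 /* success/invalid/failure */
};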

[Jacob Pan]
