Subject: Re: [PATCH v4 15/32] vfio: introduce KVM-owned IOMMU type

On 3/15/22 10:55 AM, Jason Gunthorpe wrote:
> On Tue, Mar 15, 2022 at 09:36:08AM -0400, Matthew Rosato wrote:
>>> If we do try to stick this into VFIO it should probably use the
>>> VFIO_TYPE1_NESTING_IOMMU instead - however, we would like to delete
>>> that flag entirely as it was never fully implemented, was never used,
>>> and isn't part of what we are proposing for IOMMU nesting on ARM
>>> anyhow. (So far I've found nobody to explain what the plan here was..)
>>>
>>
>> I'm open to suggestions on how better to tie this into vfio. The scenario
>> basically plays out that:
>
> Ideally I would like it to follow the same 'user space page table'
> design that Eric and Kevin are working on for HW iommu.

'[RFC v16 0/9] SMMUv3 Nested Stage Setup (IOMMU part)' ??

https://lore.kernel.org/linux-iommu/20211027104428.1059740-1-eric.auger@redhat.com/

>
> You have a 1st iommu_domain that maps and pins the entire guest physical
> address space.

Ahh, I see.

@Christian would it be OK to pursue a model that pins all of guest
memory upfront?
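
If so, the userspace side would presumably just be the usual type1 flow,
mapping (and thereby pinning) the whole guest address space once at
startup -- a rough sketch, with the container fd and the guest RAM layout
as placeholders, and guest RAM assumed to be one contiguous HVA range:

#include <linux/vfio.h>
#include <sys/ioctl.h>
#include <stdint.h>
#include <errno.h>

/* sketch only: map (and pin) all of guest RAM via type1 up front */
static int map_all_guest_ram(int container_fd, void *guest_ram_hva,
                             uint64_t guest_ram_size)
{
        struct vfio_iommu_type1_dma_map map = {
                .argsz = sizeof(map),
                .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
                .vaddr = (uintptr_t)guest_ram_hva, /* HVA backing guest RAM */
                .iova  = 0,                        /* GPA 0.., i.e. the 1:1 GPA map */
                .size  = guest_ram_size,
        };

        /* fails up front (e.g. locked memory limit) rather than on the hot path */
        return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map) ? -errno : 0;
}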

>
> You have a nested iommu_domain that represents the user page table
> (the ioat in your language I think)

Yes

>
> When the guest says it wants to set a user page table then you create
> the nested iommu_domain representing that user page table and pass in
> the anchor (guest address of the root IOPTE) to the kernel to do the
> work.
>
> The rule for all other HW's is that the user space page table is
> translated by the top level kernel page table. So when you traverse it
> you fetch the CPU page storing the guest's IOPTE by doing an IOVA
> translation through the first level page table - not through KVM.
>
> Since the first level page table and the KVM GPA should be 1:1 this is
> an equivalent operation.
>
>> 1) the iommu will be domain_alloc'd once VFIO_SET_IOMMU is issued -- so at
>> that time (or earlier) we have to make the decision on whether to use the
>> standard IOMMU or this alternate KVM/nested IOMMU.
>
> So in terms of iommufd, I would see this being an iommufd 'create a
> device specific iommu_domain' IOCTL, and you can pass in an S390
> specific data blob to make it into this special mode.
>
>>> This is why I said the second level should be an explicit iommu_domain
>>> all on its own that is explicitly coupled to the KVM to read the page
>>> tables, if necessary.
>>
>> Maybe I misunderstood this. Are you proposing 2 layers of IOMMU that
>> interact with each other within host kernel space?
>>
>> A second level runs the guest tables, pins the appropriate pieces from the
>> guest to get the resulting phys_addr(s) which are then passed via iommu to a
>> first level via map (or unmap)?
>
>
> The first level iommu_domain has the 'type1' map and unmap and pins
> the pages. This is the 1:1 map with the GPA and ends up pinning all
> guest memory because the point is you don't want to take a memory pin
> on your performance path.
>
> The second level iommu_domain points to a single IO page table in GPA
> and is created/destroyed whenever the guest traps to the hypervisor to
> manipulate the anchor (ie the GPA of the guest IO page table).
>

That makes sense, thanks for clarifying.
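
So, if I'm following, the s390 nested domain would carry something
roughly like this (struct and field names here are made up purely for
illustration):

#include <linux/iommu.h>
#include <linux/types.h>

/* sketch: how the 2nd level (nested) domain might hang together */
struct s390_nested_domain {
        struct iommu_domain domain;  /* backs the real HW IO page table */
        struct iommu_domain *parent; /* 1st level: GPA->host, fully pinned */
        dma_addr_t guest_iota;       /* anchor: GPA of the guest IO table root */
};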

>>> But I'm not sure that reading the userspace io page tables with KVM is
>>> even the best thing to do - the iommu driver already has the pinned
>>> memory, it would be faster and more modular to traverse the io page
>>> tables through the pfns in the root iommu_domain than by having KVM do
> the translations. Let's see what Matthew says..
>>
>> OK, you lost me a bit here. And this may be associated with the above.
>>
>> So, what the current implementation is doing is reading the guest DMA tables
>> (which we must pin the first time we access them) and then mapping the PTEs of
>> the associated guest DMA entries into the associated host DMA table (so,
>> again, pin and place the address, or unpin and invalidate). Basically we are
>> shadowing the first level DMA table as a copy of the second level DMA table
>> with the host address(es) of the pinned guest page(s).
>
> You can't pin/unpin in this path, there is no real way to handle error
> and ulimit stuff here, plus it is really slow. I didn't notice any of
> this in your patches, so what do you mean by 'pin' above?

Patch 18 does symbol_get for gfn_to_page (which will drive hva_to_pfn
under the covers) and kvm_release_pfn_dirty, and uses those symbols for
pin/unpin.
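
(For reference, the pattern is roughly the following -- a loose sketch,
not the literal code from the patch; symbol_put on teardown is omitted:)

#include <linux/module.h>
#include <linux/kvm_host.h>

static struct page *(*p_gfn_to_page)(struct kvm *kvm, gfn_t gfn);
static void (*p_kvm_release_pfn_dirty)(kvm_pfn_t pfn);

/* "pin": resolve and take a reference on the guest page backing a gfn */
static struct page *guest_page_pin(struct kvm *kvm, gfn_t gfn)
{
        if (!p_gfn_to_page)
                p_gfn_to_page = symbol_get(gfn_to_page);
        return p_gfn_to_page ? p_gfn_to_page(kvm, gfn) : NULL;
}

/* "unpin": mark the page dirty and release the reference again */
static void guest_page_unpin(struct page *page)
{
        if (!p_kvm_release_pfn_dirty)
                p_kvm_release_pfn_dirty = symbol_get(kvm_release_pfn_dirty);
        if (p_kvm_release_pfn_dirty)
                p_kvm_release_pfn_dirty(page_to_pfn(page));
}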

Pin/unpin errors in this series rely on the fact that RPCIT is
architected to include a panic response to the guest of 'mappings failed
for the specified range, go refresh your tables and make room', thus
allowing this to work for pageable guests.

Agreed, this would be unnecessary if we've already mapped all of guest
memory via a 1st iommu_domain.

>
> To be like other IOMMU nesting drivers the pages should already be
> pinned and stored in the 1st iommu_domain, let's say in an xarray. This
> xarray is populated by type1 map/unmap system calls like any
> iommu_domain.
>
> A nested iommu_domain should create the real HW IO page table and
> associate it with the real HW IOMMU and record the parent 1st level iommu_domain.
>
> When you do the shadowing you use the xarray of the 1st level
> iommu_domain to translate from GPA to host physical and there is no
> pinning/etc involved. After walking the guest table and learning the
> final vIOVA it is translated through the xarray to a CPU physical and
> then programmed into the real HW IO page table.
>
> There is no reason to use KVM to do any of this, and it is actively wrong
> to place CPU pages from KVM into an IOPTE that did not come through
> the type1 map/unmap calls that do all the proper validation and
> accounting.
>
> Jason
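
(If it helps make the above concrete, I read the xarray suggestion as
something along these lines -- names entirely made up, and the guest
table walk itself is elided:)

#include <linux/xarray.h>
#include <linux/mm.h>
#include <linux/io.h>

/* sketch: 1st level domain keeps its pages in an xarray indexed by GPA
 * page frame, filled at type1 map time and emptied at unmap time */
static phys_addr_t gpa_to_host_phys(struct xarray *pinned, dma_addr_t gpa)
{
        struct page *page = xa_load(pinned, gpa >> PAGE_SHIFT);

        if (!page)
                return 0;       /* GPA was never type1-mapped */
        return page_to_phys(page) | (gpa & ~PAGE_MASK);
}

/* shadow walk (per guest IOPTE): translate the GPA of the guest table
 * entry through the xarray, read the entry, translate the target GPA the
 * same way, then program the resulting host phys into the real HW IO
 * page table -- no pinning on this path. */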
