Subject: Re: [PATCH 1/1] iommu/vt-d: Fix dmar pte read access not set error
Hi,

On 12/12/19 12:35 AM, Jerry Snitselaar wrote:
> On Wed Dec 11 19, Lu Baolu wrote:
>> If the default DMA domain of a group doesn't fit a device, the
>> device will still sit in the group but use a private identity
>> domain. When map/unmap/iova_to_phys calls come through the iommu
>> API, the driver should still serve them; otherwise, other devices
>> in the same group will be impacted. Since the identity domain has
>> been mapped with the whole available memory space and RMRRs, we
>> don't need to worry about the impact on it.
>>
>
> Does this pose any potential issues with the reverse case, where the
> group has a default identity domain and the first device fits that,
> but a later device in the group needs DMA and gets a private DMA
> domain?

No. iommu_map/unmap() should not be called for a default identity domain.

    if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
            return -EINVAL;
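
(For context, that check is made by the iommu core in drivers/iommu/iommu.c
before the driver's ->map()/->unmap() callbacks are invoked, so a map or
unmap against the group's default identity domain is refused without ever
reaching intel-iommu. The sketch below is illustrative only: the function
name is made up, and the real core path also validates page sizes and walks
the range in chunks.)

    #include <linux/iommu.h>

    /*
     * Illustrative sketch of the core map-path check; not a real kernel
     * symbol.  An identity (passthrough) domain does not carry
     * __IOMMU_DOMAIN_PAGING in domain->type, so a map request against
     * the group's default identity domain fails here and never reaches
     * the driver's ->map() callback.
     */
    static int iommu_map_core_sketch(struct iommu_domain *domain,
                                     unsigned long iova, phys_addr_t paddr,
                                     size_t size, int prot)
    {
            if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
                    return -EINVAL;

            /* ... page-size sanity checks, then the driver callback ... */
            return 0;
    }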

Best regards,
baolu

>
>> Link: https://www.spinics.net/lists/iommu/msg40416.html
>> Cc: Jerry Snitselaar <jsnitsel@redhat.com>
>> Reported-by: Jerry Snitselaar <jsnitsel@redhat.com>
>> Fixes: 942067f1b6b97 ("iommu/vt-d: Identify default domains replaced with private")
>> Cc: stable@vger.kernel.org # v5.3+
>> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
>> ---
>> drivers/iommu/intel-iommu.c | 8 --------
>> 1 file changed, 8 deletions(-)
>>
>> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
>> index 0c8d81f56a30..b73bebea9148 100644
>> --- a/drivers/iommu/intel-iommu.c
>> +++ b/drivers/iommu/intel-iommu.c
>> @@ -5478,9 +5478,6 @@ static int intel_iommu_map(struct iommu_domain *domain,
>>     int prot = 0;
>>     int ret;
>>
>> -    if (dmar_domain->flags & DOMAIN_FLAG_LOSE_CHILDREN)
>> -        return -EINVAL;
>> -
>>     if (iommu_prot & IOMMU_READ)
>>         prot |= DMA_PTE_READ;
>>     if (iommu_prot & IOMMU_WRITE)
>> @@ -5523,8 +5520,6 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain,
>>     /* Cope with horrid API which requires us to unmap more than the
>>        size argument if it happens to be a large-page mapping. */
>>     BUG_ON(!pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level));
>> -    if (dmar_domain->flags & DOMAIN_FLAG_LOSE_CHILDREN)
>> -        return 0;
>>
>>     if (size < VTD_PAGE_SIZE << level_to_offset_bits(level))
>>         size = VTD_PAGE_SIZE << level_to_offset_bits(level);
>> @@ -5556,9 +5551,6 @@ static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
>>     int level = 0;
>>     u64 phys = 0;
>>
>> -    if (dmar_domain->flags & DOMAIN_FLAG_LOSE_CHILDREN)
>> -        return 0;
>> -
>>     pte = pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level);
>>     if (pte)
>>         phys = dma_pte_addr(pte);
>> --
>> 2.17.1
>>
>
