Subject: Re: [PATCH 3/4] Intel pci: Limit dmar_init_reserved_ranges
* Chris Wright (chrisw@sous-sol.org) wrote:
> > Mike Travis wrote:
> > > Region 1: Memory at f8200000000 (64-bit, prefetchable) [size=256M]
> > > Region 3: Memory at 90000000 (64-bit, non-prefetchable) [size=32M]
> > >
> > > So this 44bit MMIO address 0xf8200000000 ends up in the rbtree. As DMA
> > > maps get added and deleted from the rbtree we can end up getting a cached
> > > entry to this 0xf8200000000 entry... this is what results in the code
> > > handing out the invalid DMA map of 0xf81fffff000:
> > >
> > > [ (0xf8200000000 - 1) >> PAGE_SHIFT << PAGE_SHIFT ]
> > >
> > > The IOVA code needs to better honor the "limit_pfn" when allocating
> > > these maps.
>
> This means we could get the MMIO address range (it's no longer reserved).
> It seems to me the DMA transaction would then become a peer-to-peer
> transaction if ACS is not enabled, which could show up as a random
> register write in that GPU's 256M BAR (i.e. broken).
>
> The iova allocation should not hand out an address bigger than the
> dma_mask. What is the device's dma_mask?

Ah, looks like this is a bad interaction with the way the cached entry
is handled. I think the iova lookup should skip down to the limit_pfn
rather than assume that rb_last's pfn_lo/hi is OK just because it's in
the tree. Because you'll never hit the limit_pfn == DMA_32BIT_PFN case,
it goes straight to rb_last in __get_cached_rbnode.
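
For reference, the lookup is roughly this shape -- a from-memory
sketch of iova.c from around this time, not the verbatim source; here
limit_pfn is the IOVA_PFN of the device's dma_mask as passed down from
the allocation path:

static struct rb_node *
__get_cached_rbnode(struct iova_domain *iovad, unsigned long *limit_pfn)
{
	/* Anything other than a 32-bit limit skips the cache entirely
	 * and anchors the search at the topmost entry in the tree --
	 * which on Mike's box is the 0xf8200000000 MMIO reservation. */
	if (*limit_pfn != DMA_32BIT_PFN || !iovad->cached32_node)
		return rb_last(&iovad->rbroot);

	/* 32-bit case: start just below the cached entry. */
	*limit_pfn = container_of(iovad->cached32_node,
				  struct iova, node)->pfn_lo - 1;
	return rb_prev(iovad->cached32_node);
}

With that reservation as the anchor, the allocator hands out the
page-aligned address just below it, which is exactly the bogus map
above: ((0xf8200000000 - 1) >> PAGE_SHIFT) << PAGE_SHIFT == 0xf81fffff000.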
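
One possible shape for the fix -- an untested sketch of the "skip down
to the limit_pfn" idea, where __get_anchor_rbnode is a made-up name,
not an existing function:

static struct rb_node *
__get_anchor_rbnode(struct iova_domain *iovad, unsigned long limit_pfn)
{
	struct rb_node *node = rb_last(&iovad->rbroot);

	/* Walk left past any entries that sit entirely above limit_pfn
	 * (reserved high MMIO ranges like 0xf8200000000), so the anchor
	 * is always reachable within the device's dma_mask. */
	while (node) {
		struct iova *iova = container_of(node, struct iova, node);

		if (iova->pfn_lo <= limit_pfn)
			break;
		node = rb_prev(node);
	}
	return node;
}

That still leaves the cached32_node bookkeeping to keep consistent, but
it would at least stop a reserved 44-bit range from seeding the search
for a device with a smaller dma_mask.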

