 
Subject: Re: [PATCH 0/3] fix GART to respect device's dma_mask about virtual mappings
On Wed, 17 Sep 2008 02:24:04 +0200
Andi Kleen <andi@firstfloor.org> wrote:

> On Wed, Sep 17, 2008 at 08:53:42AM +0900, FUJITA Tomonori wrote:
> > On Tue, 16 Sep 2008 19:58:24 +0200
> > Andi Kleen <andi@firstfloor.org> wrote:
> >
> > > > > Those are always handled elsewhere in the block layer (using the
> > > > > bounce_pfn mechanism).
> > > >
> > > > I don't think that bounce buffering guarantees that dma_alloc_coherent()
> > > > returns an address that the device can access.
> > >
> > > dma_alloc_coherent() is not used for block I/O data. And dma_alloc_coherent()
> > > does handle masks >24 bits and <32 bits just fine.
> >
> > What do you mean? For example, some aacraid cards have a 31-bit DMA
> > mask. What guarantees that an IOMMU's dma_alloc_coherent() doesn't
> > return a virtual address >31 bits and <32 bits?
>
> At least the old IOMMU implementations (GART, non-GART) handled this
> by falling back to GFP_DMA. I haven't checked whether that got broken
> in the recent reorganization, but if it did, it should of course be
> fixed. Hopefully it still works.

The fallback mechanism was moved from the common code to pci-nommu,
since it doesn't work for other IOMMUs that always need virtual
mappings. Calgary needs this dma_mask trick too, but I guess it's
unlikely that the IBM servers with Calgary have such weird hardware.
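
Roughly, the fallback looks something like this. This is only a sketch
of the idea, not the actual pci-nommu code; the function name is made
up for illustration:

/*
 * Sketch of the GFP_DMA fallback discussed above: try the normal
 * allocation first, and if the pages land above the device's
 * coherent mask, retry from ZONE_DMA.
 */
#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <asm/io.h>

static void *fallback_alloc_coherent(struct device *dev, size_t size,
				     dma_addr_t *dma_handle, gfp_t gfp)
{
	u64 mask = dev->coherent_dma_mask ?
		   dev->coherent_dma_mask : DMA_32BIT_MASK;
	int order = get_order(size);
	void *vaddr;

	vaddr = (void *)__get_free_pages(gfp, order);
	if (!vaddr)
		return NULL;

	*dma_handle = virt_to_phys(vaddr);
	if (*dma_handle + size - 1 <= mask)
		return vaddr;

	/*
	 * The pages are above the device's mask (e.g. a 31-bit aacraid
	 * mask), so free them and retry from ZONE_DMA.
	 */
	free_pages((unsigned long)vaddr, order);

	if (gfp & GFP_DMA)
		return NULL;	/* already tried the lowest zone */

	return fallback_alloc_coherent(dev, size, dma_handle,
				       gfp | GFP_DMA);
}

The real code has more to it (GFP_DMA32, zeroing, the distinction
between dma_mask and coherent_dma_mask), but that's the basic trick.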

