From: Kumar Gala
Subject: Re: tracking of PCI address space
Date: 2009-04-08

On Apr 8, 2009, at 4:49 PM, Ira Snyder wrote:

> On Wed, Apr 08, 2009 at 03:53:55PM -0500, Kumar Gala wrote:
>> I was wondering if we have anything that tracks regions associated
>> with the "inbound" side of a pci_bus.
>>
>> What I mean is that on embedded PPC we have window/mapping registers
>> for both inbound (accessing memory on the SoC) and outbound
>> (accessing PCI device MMIO, I/O, etc.) transactions. The combination
>> of the inbound & outbound windows conveys what exists in the PCI
>> address space vs. the CPU physical address space (and how to map
>> from one to the other). Today in PPC land we only attach outbound
>> windows to the pci_bus, so technically the inbound side information
>> (like what subset of physical memory is visible on the PCI bus)
>> seems to be lost.
>>
>
> To the best of my knowledge there is no API to set up inbound windows
> in Linux. I've been implementing a virtio-over-PCI driver which needs
> the inbound windows, so I set them up myself during driver probe,
> using get_immrbase() to locate the IMMR registers. This board is a
> PCI slave/agent; it doesn't even have PCI support compiled into the
> kernel.

I'm not explicitly concerned with setting up inbound windows; it's
more about having a consistent view of the PCI address space, which
may be different from the CPU physical address space.
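
Something along the lines of the following is what I have in mind.
This is a hypothetical sketch, not existing kernel API: record the
inbound windows on the controller next to the outbound resources, so
generic code can translate between the two address spaces.

    #include <linux/types.h>

    /* Hypothetical: one inbound window, CPU phys <-> PCI bus addr */
    struct pci_dma_window {
    	u64 cpu_addr;	/* CPU physical base */
    	u64 pci_addr;	/* where it appears in PCI address space */
    	u64 size;
    };

    /* Hypothetical per-controller inbound state, analogous to the
     * outbound mem/io resources we already hang off the controller */
    struct pci_inbound_info {
    	int nwins;
    	struct pci_dma_window win[4];
    };

    /* Translate a CPU physical address to the PCI bus address it is
     * visible at, or return (u64)-1 if no inbound window covers it. */
    static u64 cpu_to_pci_addr(struct pci_inbound_info *in, u64 cpu)
    {
    	int i;

    	for (i = 0; i < in->nwins; i++) {
    		struct pci_dma_window *w = &in->win[i];

    		if (cpu >= w->cpu_addr && cpu - w->cpu_addr < w->size)
    			return w->pci_addr + (cpu - w->cpu_addr);
    	}
    	return (u64)-1;
    }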

I'm working on code to actually set up the inbound windows on
85xx/86xx class devices (based on the dma-ranges property in the
device tree). As I was thinking about this, I realized that the sense
of ranges/dma-ranges in the .dts, and what we map to outbound vs.
inbound, changes depending on whether we are an agent or a host.
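
For illustration, the dma-ranges parsing might look roughly like the
sketch below. The cell counts are assumptions (3-cell PCI child
address, 2-cell size, parent address width from #address-cells), and
error handling is omitted:

    #include <linux/of.h>

    static void parse_pci_dma_ranges(struct device_node *hose)
    {
    	const u32 *ranges;
    	int rlen, np, pna = of_n_addr_cells(hose);

    	ranges = of_get_property(hose, "dma-ranges", &rlen);
    	if (ranges == NULL)
    		return;

    	np = 3 + pna + 2;	/* pci addr + parent addr + size cells */

    	for (rlen /= 4; rlen >= np; rlen -= np, ranges += np) {
    		u64 pci_addr = of_read_number(ranges + 1, 2);
    		u64 cpu_addr = of_read_number(ranges + 3, pna);
    		u64 size = of_read_number(ranges + 3 + pna, 2);

    		/* here we would program one inbound window:
    		 * PIWBAR <- pci_addr, PITAR <- cpu_addr,
    		 * PIWAR size field <- ilog2(size) - 1 */
    	}
    }

In the agent case the roles flip, so the same tuples would end up
driving outbound windows instead, which is exactly the asymmetry I
mean.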

- k

