Subject: Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports
On Tue, 2015-12-29 at 22:00 +0530, Santosh Shukla wrote:
> On Tue, Dec 29, 2015 at 9:50 PM, Arnd Bergmann <arnd@arndb.de> wrote:
> > On Tuesday 29 December 2015 21:25:15 Santosh Shukla wrote:
> > > Mistakenly added the wrong email-id for Alex; looping in his
> > > correct one.
> > >
> > > On 29 December 2015 at 21:23, Santosh Shukla
> > > <santosh.shukla@linaro.org> wrote:
> > > > On 29 December 2015 at 18:58, Arnd Bergmann <arnd@arndb.de> wrote:
> > > > > On Wednesday 23 December 2015 17:04:40 Santosh Shukla wrote:
> > > > > > On 23 December 2015 at 03:26, Arnd Bergmann <arnd@arndb.de> wrote:
> > > > > > > On Tuesday 22 December 2015, Santosh Shukla wrote:
> > > > > > > > }
> > > > > > > >
> > > > > > > > So I care about a /dev/ioports-type interface that can
> > > > > > > > do more than byte-wide data copies to/from user-space.
> > > > > > > > I tested this patch with a little modification and was
> > > > > > > > able to run the pmd driver for the arm/arm64 case.
> > > > > > > >
> > > > > > > > I'd like to know how to address the pci_io region
> > > > > > > > mapping problem for arm/arm64, in case the /dev/ioports
> > > > > > > > approach is not acceptable; or else I can spend time
> > > > > > > > restructuring the patch.
> > > > > > > >
> > > > > > >
> > > > > > > For the use case you describe, can't you use the vfio
> > > > > > > framework to access the PCI BARs?
> > > > > > >
> > > > > >
> > > > > > I looked at the file drivers/vfio/pci/vfio_pci.c, func
> > > > > > vfio_pci_map(), and it looks to me like it only maps the
> > > > > > ioresource_mem pci region; pasting a code snippet:
> > > > > >
> > > > > > if (!(pci_resource_flags(pdev, index) & IORESOURCE_MEM))
> > > > > >         return -EINVAL;
> > > > > > ....
> > > > > >
> > > > > > and I want to map the ioresource_io pci region for the arm
> > > > > > platform in my use-case. Not sure whether vfio maps the
> > > > > > pci_iobar region?
> > > > >
> > > > > Mapping I/O BARs is not portable; notably, it doesn't work on x86.
> > > > >
> > > > > You should be able to access them using the read/write
> > > > > interface on the vfio device.
> > > > >
> > > > Right, x86 doesn't care, as iopl() can give a userspace
> > > > application direct access to ioports.
> > > >
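
(For reference, the iopl() route mentioned above looks roughly like
the sketch below -- x86-only, needs root/CAP_SYS_RAWIO, and the COM1
line status register is used purely as an example port.)

	#include <stdio.h>
	#include <sys/io.h>	/* iopl(), inb(): x86 glibc only */

	int main(void)
	{
		if (iopl(3) < 0) {	/* raise the i/o privilege level */
			perror("iopl");
			return 1;
		}
		/* direct port read, no kernel mediation from here on */
		printf("COM1 LSR = 0x%02x\n", inb(0x3f8 + 5));
		return 0;
	}
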
> > > > Also, in another dpdk thread [1] Alex suggested that someone
> > > > propose io bar mapping in vfio-pci, I guess in particular for
> > > > non-x86 arches, so I started working on it.
> > > >
> > >
> >
> > So what's wrong with just using the existing read/write API on all
> > architectures?
> >
>
> Nothing wrong; in fact the read/write api will still be used to
> access the mmapped io pci bar from userspace. But right now
> vfio_pci_map() doesn't

vfio_pci_mmap(), the read/write accessors fully support i/o port.
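
To illustrate, a minimal userspace sketch (hypothetical: assumes
'device' is a vfio device fd obtained through the usual
container/group setup, BAR0 is an i/o port BAR, and error handling is
trimmed) of reading a dword from port space through that interface:

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/vfio.h>

	static int read_port_dword(int device, off_t off, uint32_t *val)
	{
		struct vfio_region_info reg = { .argsz = sizeof(reg) };

		reg.index = VFIO_PCI_BAR0_REGION_INDEX;
		if (ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &reg) < 0)
			return -1;
		/* the kernel performs the actual port access on behalf
		 * of userspace */
		if (pread(device, val, sizeof(*val),
			  (off_t)reg.offset + off) != sizeof(*val))
			return -1;
		return 0;
	}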

> map the io pci bar in particular (i.e. ioresource_io), so I guess we
> need to add that bar mapping in vfio. Please correct me if I
> misunderstood anything.

Maybe I misunderstood what you were asking for; it seemed like you
specifically wanted to be able to mmap i/o port space, which is
possible, just not something we can do on x86.  Maybe I should have
asked why.  The vfio API already supports read/write access to i/o port
space, so if you intend to mmap it only to use read/write on top of the
mmap, I suppose you might see some performance improvement, but not
really any new functionality.  You'd also need to deal with page size
issues since i/o port ranges are generally quite a bit smaller than the
host page size, and they'd need to be mapped such that each device does
not share a host page of i/o port space with other devices.  On x86 i/o
port space is mostly considered legacy and not a performance critical
path for most modern devices; PCI SR-IOV specifically excludes i/o port
space.  So what performance gains do you expect to see in being able to
mmap i/o port space, and what hardware are you dealing with that relies
on i/o port space rather than mmio for performance?
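
(If the mmap path is still what you're after, the usual userspace
pattern looks roughly like the sketch below -- hypothetical, assuming
'device' is a vfio device fd and BAR0 is the region of interest.
vfio-pci only advertises the mmap flag on regions it can safely map,
so an i/o port BAR would fall back to read/write at reg.offset.)

	#include <stddef.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <sys/types.h>
	#include <linux/vfio.h>

	static void *map_bar0(int device)
	{
		struct vfio_region_info reg = { .argsz = sizeof(reg) };
		void *base;

		reg.index = VFIO_PCI_BAR0_REGION_INDEX;
		if (ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &reg) < 0)
			return NULL;
		/* no mmap support advertised for this region */
		if (!(reg.flags & VFIO_REGION_INFO_FLAG_MMAP))
			return NULL;
		base = mmap(NULL, reg.size, PROT_READ | PROT_WRITE,
			    MAP_SHARED, device, (off_t)reg.offset);
		return base == MAP_FAILED ? NULL : base;
	}

Thanks,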

Alex

