From: Patrick Brunner <brunner@stettbacher.ch>
Subject: x86: Is Multi-MSI support for PCIe working?
Date: 31 Oct 2019
Dear all,

As the subject suggests, I'm wondering whether PCIe Multi-MSI support works on x86 with the IOMMU enabled.

The reason I'm asking is that I couldn't find any device driver using multiple MSIs. There are examples using either a single MSI or multiple MSI-X vectors, but none using multiple MSIs.
I'm trying to get an x1 PCIe card with a Lattice ECP5 FPGA working, which utilises 2 MSIs. With the IOMMU enabled, I'm able to allocate both desired MSIs; with the
IOMMU disabled, only one. I could happily live with the IOMMU disabled, but then I can't allocate the second MSI, which is required for the device to function:
the first MSI signals that a DMA transfer from the FPGA to the CPU has finished, and its IRQ handler basically just wakes up a user-mode task. The second MSI is
used as a shared IRQ for a number of 16750-compatible UARTs in the FPGA, mapped through a Wishbone bus to the PCIe BAR.
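For reference, the allocation path in my driver looks roughly like the sketch below (handler names are placeholders, not the actual driver code; it assumes the
usual pci_alloc_irq_vectors()/pci_irq_vector() API):

#include <linux/interrupt.h>
#include <linux/pci.h>

/* ISRs implemented elsewhere in the driver */
static irqreturn_t dma_done_isr(int irq, void *dev_id);
static irqreturn_t uart_isr(int irq, void *dev_id);

static int fpga_setup_irqs(struct pci_dev *pdev)
{
	int nvec, ret;

	/* exactly 2 vectors, plain MSI (the device has no MSI-X capability) */
	nvec = pci_alloc_irq_vectors(pdev, 2, 2, PCI_IRQ_MSI);
	if (nvec < 0)
		return nvec;	/* this is where it fails with the IOMMU disabled */

	/* vector 0: DMA-done notification */
	ret = request_irq(pci_irq_vector(pdev, 0), dma_done_isr, 0,
			  "fpga-dma", pdev);
	if (ret)
		goto err_free_vectors;

	/* vector 1: shared IRQ line for the 16750-compatible UARTs */
	ret = request_irq(pci_irq_vector(pdev, 1), uart_isr, 0,
			  "fpga-uart", pdev);
	if (ret)
		goto err_free_irq0;

	return 0;

err_free_irq0:
	free_irq(pci_irq_vector(pdev, 0), pdev);
err_free_vectors:
	pci_free_irq_vectors(pdev);
	return ret;
}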
The first MSI works perfectly, but the second one causes one IOMMU page fault per UART during probing of the UARTs, and one IOMMU page fault with every byte
received through any UART. I ran out of ideas a while ago, as the IOMMU subsystem is too complex for me to understand as a non-kernel developer, but I'd highly
appreciate any hints from more experienced developers. Could anybody provide me with some pointers, please?

Best regards,

Patrick

--
Patrick Brunner

Stettbacher Signal Processing
Neugutstrasse 54
CH-8600 Duebendorf
Switzerland

Phone: +41 43 299 57 23
Mail: brunner@stettbacher.ch


