Subject: Re: [PATCH] genirq/msi: Make sure PCI MSIs are activated early
On Tue, 26 Jul 2016, Thomas Gleixner wrote:
> On Tue, 26 Jul 2016, Thomas Gleixner wrote:
> > On Mon, 25 Jul 2016, Bjorn Helgaas wrote:
> > > On Mon, Jul 25, 2016 at 09:45:13AM +0200, Thomas Gleixner wrote:
> > > I thought the original issue [1] was that PCI_MSI_FLAGS_ENABLE was being
> > > written before PCI_MSI_ADDRESS_LO. That doesn't sound like a good
> > > idea to me.
> >
> > Well. That's only a problem if the PCI device does not support masking. But
> > yes, we missed that case back then.
> >
> > > That does seem like a problem. Maybe it would be better to delay
> > > setting PCI_MSI_FLAGS_ENABLE until after the MSI address & data bits
> > > have been set?
> >
> > I thought about that, but that gets ugly pretty fast. Here is an alternative
> > solution.
> >
> > I think that's the proper place to do it _AFTER_ the hierarchical allocation
> > took place. On x86 Marc's ACTIVATE_EARLY flag would not work because the
> > message is not yet ready to be assembled.
>
> Actually it works, because the MSI domain is the last one to run the
> allocation function, so everything else is initialized already.
>
> I'll take Marc's patch with some additional commentary, as it turned out to be
> a workaround for the reported VMware issues with PCI/MSI-X pass-through.
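
For reference, the core of Marc's approach is to activate the interrupt right
after the hierarchical allocation has run, so the address/data registers are
already programmed by the time the PCI layer sets PCI_MSI_FLAGS_ENABLE. A rough
sketch of the hunk in msi_domain_alloc_irqs() (untested, and assuming the flag
ends up named MSI_FLAG_ACTIVATE_EARLY):

	/*
	 * Activate the interrupt right after allocation, before the PCI
	 * layer enables MSI in the device. Otherwise the device can latch
	 * a stale/random message.
	 */
	if (info->flags & MSI_FLAG_ACTIVATE_EARLY) {
		struct irq_data *irq_data;

		irq_data = irq_domain_get_irq_data(domain, desc->irq);
		irq_domain_activate_irq(irq_data);
	}

The PCI side would then just set that flag in the msi_domain_info it hands to
pci_msi_create_irq_domain().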

Now I dug a bit deeper into that whole PCI/MSI maze.

When an interrupt is freed, we write the MSI message to 0, but the
PCI_MSI_FLAGS_ENABLE flag is still set. That makes me wonder ...
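
The zero write comes from the generic deactivate callback of the MSI
irqdomain; simplified sketch of kernel/irq/msi.c (from memory, details may
differ):

	static void msi_domain_deactivate(struct irq_domain *domain,
					  struct irq_data *irq_data)
	{
		struct msi_msg msg;

		/* Clear address/data so the device can no longer raise this vector */
		memset(&msg, 0, sizeof(msg));
		irq_chip_write_msi_msg(irq_data, &msg);
	}

Nothing in that path touches PCI_MSI_FLAGS_ENABLE in the capability; that bit
is only cleared when the driver eventually calls pci_disable_msi().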

Thanks,

tglx
