Subject: Re: [PATCH 0/5] Threaded MSI interrupt for VFIO PCI device
On Wed, Dec 16, 2015 at 12:15:23PM -0700, Alex Williamson wrote:
> On Wed, 2015-12-16 at 18:56 +0100, Paolo Bonzini wrote:
> > Alex,
> >
> > can you take a look at the extension to the irq bypass interface in
> > patch 2?  I'm not sure I understand what is the case where you have
> > multiple consumers for the same token.
>
> The consumers would be, for instance, Intel PI + the threaded handler
> added in this series.  These run independently, the PI bypass simply
> makes the interrupt disappear from the host when it catches it, but if
> the vCPU isn't running in the right place at the time of the interrupt,
> it gets delivered to the host, in which case the secondary consumer
> implementing handle_irq() provides a lower latency injection than the

Sorry for the slow response.

If the PI interrupt is delivered to the host because the guest is not
running, I think it will not trigger the secondary consumer. The reason
is that, with PI, the interrupt is delivered as the POSTED_INTR_VECTOR
or POSTED_INTR_WAKEUP_VECTOR, so the secondary consumer's handle_irq()
will not be invoked in the runtime scenario.
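
To illustrate (a rough sketch with invented names, not code from this
series): the consumer's handle_irq() is only reachable from the
producer's hard interrupt handler on the host, e.g.

    /* Rough sketch, invented names.  With PI active the MSI is
     * remapped and delivered directly as POSTED_INTR_VECTOR (vCPU
     * running) or POSTED_INTR_WAKEUP_VECTOR (vCPU blocked), so this
     * host handler is never entered and the secondary consumer's
     * handle_irq() is never called. */
    static irqreturn_t vfio_msi_primary_handler(int irq, void *arg)
    {
            struct vfio_pci_irq_ctx *ctx = arg;

            /* hypothetical helper: let the bypass consumer try an
             * atomic injection */
            if (irq_bypass_try_handle(&ctx->producer))
                    return IRQ_HANDLED;

            return IRQ_WAKE_THREAD; /* fall back to the IRQ thread */
    }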

> eventfd path.  If PI isn't supported, only this latter consumer is
> registered.
>
> On the surface it seems like a reasonable solution, though having
> multiple consumers implementing handle_irq() seems problematic.  Do we

Yes, I agree that having multiple consumers implementing handle_irq()
seems not good. But I do think it can be helpful. A naive case: a
consumer could be created to log all the interrupt events, or to
create a pipe for analysis.
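
For example (purely illustrative; the handle_irq() signature here is
guessed, not taken from this series):

    /* Illustrative logging consumer: records every bypassed
     * interrupt without injecting anything. */
    static void log_consumer_handle_irq(struct irq_bypass_consumer *cons)
    {
            trace_printk("bypass irq, token %p\n", cons->token);
    }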

> get multiple injections if we call them all?  Should we have some way

As discussed above, currently I think we have only one consumer
implementing handle_irq(), so it should be OK? Or we could limit the
framework to support only one consumer with handle_irq().
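
E.g. a check like this at consumer registration time could enforce
that (sketch only, against virt/lib/irqbypass.c; field and variable
names are approximate):

    /* Sketch: allow multiple consumers per token, but reject a
     * second one that also implements handle_irq(). */
    list_for_each_entry(tmp, &consumers, node) {
            if (tmp->token == consumer->token &&
                tmp->handle_irq && consumer->handle_irq) {
                    mutex_unlock(&lock);
                    return -EBUSY;
            }
    }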

> to prioritize one handler versus another?  Perhaps KVM should have a
> single unified consumer that can provide that sort of logic, though we

I'd still think of different consumers for the PI and this fast_IRQ
handling.

Thanks
--jyh

> still need the srcu code added here to protect against registration and
> irq_handler() races.  Thanks,
>
> Alex
>
> > On 03/12/2015 19:22, Yunhong Jiang wrote:
> > > When assigning a VFIO device to a KVM guest with low latency
> > > requirement, it is better to handle the interrupt in the hard
> > > interrupt context, to reduce the context switch to/from the IRQ
> > > thread.
> > >
> > > Based on discussion on https://lkml.org/lkml/2015/10/26/764, the
> > > VFIO msi interrupt is changed to use request_threaded_irq(). The
> > > primary interrupt handler tries to set the guest interrupt
> > > atomically. If it fails to achieve it, a threaded interrupt
> > > handler will be invoked.
> > >
> > > The irq_bypass manager is extended for this purpose. The KVM
> > > eventfd will provide a irqbypass consumer to handle the interrupt
> > > at hard interrupt context. The producer will invoke the consumer's
> > > handler then.
> > >
> > > Yunhong Jiang (5):
> > >   Extract the irqfd_wakeup_pollin/irqfd_wakeup_pollup
> > >   Support runtime irq_bypass consumer
> > >   Support threaded interrupt handling on VFIO
> > >   Add the irq handling consumer
> > >   Expose x86 kvm_arch_set_irq_inatomic()
> > >
> > >  arch/x86/kvm/Kconfig              |   1 +
> > >  drivers/vfio/pci/vfio_pci_intrs.c |  39 ++++++++++--
> > >  include/linux/irqbypass.h         |   8 +++
> > >  include/linux/kvm_host.h          |  19 +++++-
> > >  include/linux/kvm_irqfd.h         |   1 +
> > >  virt/kvm/Kconfig                  |   3 +
> > >  virt/kvm/eventfd.c                | 131 ++++++++++++++++++++++++++------------
> > >  virt/lib/irqbypass.c              |  82 ++++++++++++++++++------
> > >  8 files changed, 214 insertions(+), 70 deletions(-)
> > >
>

