From: Eric W. Biederman
Date: 2008-05-28
Subject: Re: Question about interrupt routing and irq allocation
Jeremy Fitzhardinge <jeremy@goop.org> writes:

> Eric W. Biederman wrote:
>> - I think using create_irq is a good step.
>> - I think all vectors are wasted in the case of Xen.
>>
>
> The case I'm discussing now is in hvm domains - ie, fully virtualized PC
> platform. I'm adding a driver to poke a hole through all the emulated hardware
> to get directly to the underlying Xen layer so that we can run paravirtual
> drivers to get better performance. Only the irqs associated with pv drivers will
> waste their vectors.

I see. The fully virtualized machine case. So we do have apics
visible to us.

>> - I think we want a individual irq for each xen irq source.
>> Sparc already does a demux in similar circumstances with
>> a queue of received MSI messages and a single cpu irq
>> that these get demuxed from.
>> If we don't have individual irqs per drivers it will be hard
>> to share a source base with native drivers.
>>
>
> In this case the sharing is between fully paravirtualized paravirt_ops Xen and
> pv-on-hvm drivers. In general I want those drivers to look as normal as
> possible, so they should use irqs in a normal way.

Right. We should be able to assume that the native irqs for
those devices are not shared, and we should be able to extend
that property (among others) to the virtualized irqs for the
devices.
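
Roughly, the demux pattern looks like this (a sketch only;
read_pending_sources() and source_to_irq() are hypothetical stand-ins
for however the hypervisor reports and maps its event sources):

/* needs <linux/interrupt.h>, <linux/irq.h>, <linux/bitops.h> */
static irqreturn_t parent_interrupt(int irq, void *dev_id)
{
	unsigned long pending = read_pending_sources();

	while (pending) {
		int src = __ffs(pending);

		pending &= ~(1UL << src);
		/* each source got its own irq from create_irq() */
		generic_handle_irq(source_to_irq(src));
	}
	return IRQ_HANDLED;
}

A pv driver then just calls request_irq() on source_to_irq(src) the
same way a native driver would.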

Under other hypervisors on sparc and ppc we can run unmodified pci
drivers; only the OS platform code changes. How close to that
can we come in the Xen case?

I think running unmodified drivers with the OS platform code doing
the adaptation should be the goal, unless there is a real need for
the driver to know about Xen. Is that compatible with what you
are trying to achieve?

>> - I think it would be very nice if we could get irqs allocated
>> in request_irq instead of create_irq (and equivalents).
>>
>
> Something along the lines of passing -1 as the irq, and it would return the
> allocated irq? It's not clear to me how all that would fit together.

Groan. I misspoke. I meant:
- I think it would be very nice if we could get vectors allocated
in request_irq instead of in create_irq (and equivalents).

Just delayed vector allocation. I wasn't after something driver
visible.
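
Concretely, the split would look something like this (a sketch of the
intent only; reserve_irq_number(), allocate_vector_for() and
bind_irq_to_vector() are hypothetical helpers, not the existing API):

int pv_create_irq(void)
{
	/* like create_irq(), but no vector is consumed yet */
	return reserve_irq_number();
}

/* called from the request_irq()/setup_irq() path */
int pv_startup_irq(unsigned int irq)
{
	/* deferred: the vector is only allocated once a handler exists */
	int vector = allocate_vector_for(irq);

	if (vector < 0)
		return vector;
	return bind_irq_to_vector(irq, vector);
}

The point being that irqs which are created but never requested would
not tie up a vector.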

>> - I think ultimately it makes sense to port the per vector
>> code to 32bit linux. On single cpu systems the cost should
>> be just a hair more code, but no extra data structures. We
>> can easily restrict the irq allocation to allocating the same
>> vector on all cpus for any old machines that prove flaky with
>> irq migration.
>>
>> We kept the code for the two architectures fairly close in sync
>> when I worked on it, so a merge should not be a big deal.
>
> Well, if I find myself at a loose end, I'll have a look at it.

Thanks.

Eric
