Subject: Re: [PATCH] x86: Add check for number of available vectors before CPU down
From: Rui Wang
Date: 2013-12-19
On 12/19/13, Prarit Bhargava <prarit@redhat.com> wrote:
>
>
> On 12/03/2013 09:48 PM, rui wang wrote:
>> On 11/20/13, Prarit Bhargava <prarit@redhat.com> wrote:
>> Have you considered the case when an IRQ is destined to more than one CPU?
>> e.g.:
>>
>> bash# cat /proc/irq/89/smp_affinity_list
>> 30,62
>> bash#
>>
>> In this case offlining CPU30 does not seem to require an empty vector
>> slot. It seems that we only need to change the affinity mask of irq89.
>> Your check_vectors() assumed that each irq on the offlining cpu
>> requires a new vector slot.
>>
>
> Rui,
>
> The smp_affinity_list only indicates a preferred destination of the IRQ,
> not the *actual* location of the IRQ. So the IRQ is on one of cpu 30 or
> 62 but not both simultaneously.
>

It depends on how the IOAPIC (or MSI/MSI-X) is configured. An IRQ can be
broadcast simultaneously to all destination CPUs (Fixed Mode) or delivered
only to the CPU running at the lowest priority (Lowest Priority Mode).
This is programmed in the Delivery Mode bits of the IOAPIC's I/O
Redirection Table registers, or in the Message Data Register in the case
of MSI/MSI-X.
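
For reference, a minimal user-space sketch of that shared field layout:
the vector lives in bits 7:0 and the delivery mode in bits 10:8, in both
the redirection table entry's low dword and the MSI Message Data Register
(per the Intel SDM). The macro and enum names below are invented for
illustration; they are not kernel identifiers.

/*
 * Illustrative sketch (not kernel code): decoding the Delivery Mode
 * bits shared by the IOAPIC Redirection Table Entry (low dword) and
 * the MSI Message Data Register. Vector: bits 7:0; delivery mode:
 * bits 10:8.
 */
#include <stdint.h>
#include <stdio.h>

#define IRQ_VECTOR_MASK     0x000000ffu
#define IRQ_DELIVERY_SHIFT  8
#define IRQ_DELIVERY_MASK   (0x7u << IRQ_DELIVERY_SHIFT)

enum delivery_mode {
	DM_FIXED  = 0,  /* delivered to every CPU in the destination set  */
	DM_LOWPRI = 1,  /* delivered to the lowest-priority CPU in the set */
	DM_SMI    = 2,  /* remaining encodings per the Intel SDM           */
	DM_NMI    = 4,
	DM_INIT   = 5,
	DM_EXTINT = 7,
};

static enum delivery_mode delivery_mode(uint32_t low_word)
{
	return (enum delivery_mode)
		((low_word & IRQ_DELIVERY_MASK) >> IRQ_DELIVERY_SHIFT);
}

int main(void)
{
	/* Hypothetical Message Data value: vector 0x59, lowest-priority. */
	uint32_t msg_data = 0x59 | (DM_LOWPRI << IRQ_DELIVERY_SHIFT);

	printf("vector 0x%02x, %s mode\n",
	       (unsigned)(msg_data & IRQ_VECTOR_MASK),
	       delivery_mode(msg_data) == DM_LOWPRI ?
	       "lowest-priority" : "other");
	return 0;
}

In Fixed Mode with a multi-CPU destination the IRQ fires on every CPU in
the set, which is exactly the case the affinity question below is about.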

> If the case is that 62 is being brought down, then the smp_affinity mask
> will be updated to reflect only cpu 30 (and vice versa).
>

Yes, the affinity mask should be updated. But if the IRQ was destined to
more than one CPU, your "this_counter" does not seem to count the right
number of required vectors. Are you saying that the smp_affinity mask is
broken on Linux, so that there is no way to configure an IRQ to target
more than one CPU?
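
To make the counting concern concrete, here is a small self-contained
sketch (the struct and every name in it are hypothetical; this is not
the patch's actual check_vectors()). The point is that an IRQ whose
affinity mask still contains another online CPU needs no spare vector
slot on the surviving CPUs, only a trimmed affinity mask:

/*
 * Illustrative user-space sketch of the counting question above.
 * When CPU `dying` goes offline, only IRQs whose affinity mask
 * contains no other online CPU need a free vector slot elsewhere;
 * multi-CPU IRQs just need their affinity mask trimmed.
 */
#include <stdbool.h>
#include <stdio.h>

#define NCPUS 64

struct irq_desc {
	int irq;
	bool affinity[NCPUS];   /* CPUs this IRQ may be delivered to */
};

static bool cpu_online[NCPUS];

/* Count IRQs that truly need migrating to a new vector slot. */
static int vectors_needed(const struct irq_desc *descs, int n, int dying)
{
	int needed = 0;

	for (int i = 0; i < n; i++) {
		if (!descs[i].affinity[dying])
			continue;       /* not routed through the dying CPU */

		bool has_other_target = false;
		for (int cpu = 0; cpu < NCPUS; cpu++) {
			if (cpu != dying && cpu_online[cpu] &&
			    descs[i].affinity[cpu]) {
				has_other_target = true;
				break;
			}
		}
		/* Only IRQs with no surviving target need a new vector. */
		if (!has_other_target)
			needed++;
	}
	return needed;
}

int main(void)
{
	cpu_online[30] = cpu_online[62] = true;

	/* irq 89 targets both 30 and 62, like the example above. */
	struct irq_desc descs[] = {
		{ .irq = 89, .affinity = { [30] = true, [62] = true } },
	};

	/* Offlining CPU 30: irq 89 can still fire on CPU 62. */
	printf("vectors needed: %d\n", vectors_needed(descs, 1, 30));
	return 0;
}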

Thanks
Rui

> P.
>

