 
From: Marc Zyngier <maz@kernel.org>
Subject: Re: [PATCH v3 08/16] irqchip/gic: Configure SGIs as standard interrupts
On 2020-09-16 16:46, Jon Hunter wrote:
> On 16/09/2020 16:10, Marc Zyngier wrote:
>> Hi Jon,
>>
>> +Linus, who is facing a similar issue.
>>
>> On 2020-09-16 15:16, Jon Hunter wrote:
>>> Hi Marc,
>>>
>>> On 14/09/2020 14:06, Marek Szyprowski wrote:
>>>> Hi Marc,
>>>>
>>>> On 01.09.2020 16:43, Marc Zyngier wrote:
>>>>> Change the way we deal with GIC SGIs by turning them into proper
>>>>> IRQs, and calling into the arch code to register the interrupt
>>>>> range
>>>>> instead of a callback.
>>>>>
>>>>> Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
>>>>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>>>> This patch landed in linux next-20200914 as commit ac063232d4b0
>>>> ("irqchip/gic: Configure SGIs as standard interrupts"). Sadly it
>>>> breaks
>>>> booting of all Samsung Exynos 4210/4412 based boards (dual/quad ARM
>>>> Cortex A9 based). Here are the last lines from the bootlog:
>>>
>>> I am observing the same thing on several Tegra boards (both arm and
>>> arm64). Bisect is pointing to this commit. Reverting this alone does
>>> not
>>> appear to be enough to fix the issue.
>>
>> Right, I am just massively spoilt by the GICv3 spec, and failed to
>> remember that ye olde GIC exposes the source CPU in IAR *and* wants
>> it back, while newer GICs deal with that transparently.
>>
>> Can you try the patch below and let me know?
>
> Yes will do.
>
>> @@ -365,14 +354,13 @@ static void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
>>              smp_rmb();
>>
>>              /*
>> -             * Samsung's funky GIC encodes the source CPU in
>> -             * GICC_IAR, leading to the deactivation to fail if
>> -             * not written back as is to GICC_EOI.  Stash the
>> -             * INTID away for gic_eoi_irq() to write back.
>> -             * This only works because we don't nest SGIs...
>> +             * The GIC encodes the source CPU in GICC_IAR,
>> +             * leading to the deactivation to fail if not
>> +             * written back as is to GICC_EOI.  Stash the INTID
>> +             * away for gic_eoi_irq() to write back.  This only
>> +             * works because we don't nest SGIs...
>>               */
>> -            if (is_frankengic())
>> -                set_sgi_intid(irqstat);
>> +            this_cpu_write(sgi_intid, intid);
>
> I assume that it should be irqstat here and not intid?

Indeed. As you can tell, I haven't even tried to compile it, sorry about
that.
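
For reference, a minimal sketch of the intended ack/EOI flow with that
typo fixed. The sgi_intid percpu variable is the one from the patch;
the register macros are the standard ones from <linux/irqchip/arm-gic.h>,
and the function names here are illustrative only, not the actual driver
code:

#include <linux/io.h>
#include <linux/irqchip/arm-gic.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(u32, sgi_intid);

static void sketch_handle_irq(void __iomem *cpu_base)
{
	/*
	 * GICv2 GICC_IAR: bits [9:0] hold the INTID, bits [12:10] the
	 * source CPU for SGIs.  GICC_EOIR expects the same CPUID bits
	 * back, so the raw IAR value must be preserved, not just the
	 * masked INTID.
	 */
	u32 irqstat = readl_relaxed(cpu_base + GIC_CPU_INTACK);
	u32 intid = irqstat & GICC_IAR_INT_ID_MASK;

	if (intid < 16)
		/* SGI: stash the full IAR value for gic_eoi_irq() */
		this_cpu_write(sgi_intid, irqstat);

	/* ... dispatch intid as usual ... */
}

static void sketch_eoi_sgi(void __iomem *cpu_base)
{
	/* Write back exactly what GICC_IAR returned, CPUID bits included */
	writel_relaxed(this_cpu_read(sgi_intid), cpu_base + GIC_CPU_EOI);
}

The point is simply that this_cpu_write() must receive irqstat, the raw
GICC_IAR value, rather than the masked intid; otherwise the EOI write
loses the source-CPU field and the deactivation fails on these GICs.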

M.
--
Jazz is not dead. It just smells funny...
