Subject: Re: [PATCH] PCI: dra7xx: mark dra7xx_pcie_msi irq as IRQF_NO_THREAD
From: Grygorii Strashko
Date: 2015-11-13
On 11/12/2015 11:19 AM, Sebastian Andrzej Siewior wrote:
> On 11/06/2015 08:59 PM, Grygorii Strashko wrote:
>> Hi Sebastian,
>
> Hi Grygorii,
>
>> - IRQF_NO_THREAD was the first option considered for this kind of issue.
>> But: there are currently ~60 occurrences of IRQF_NO_THREAD in the kernel -
>> most of them in arch code, and only 6 drivers (drivers/*) use it [Addendum 2].
>> During the past year I've found only two threads related to IRQF_NO_THREAD,
>> and in both cases IRQF_NO_THREAD was added for arch-specific IRQs which
>> can't be threaded (https://lists.ozlabs.org/pipermail/linuxppc-dev/2014-November/122659.html,
>> https://lkml.org/lkml/2015/4/21/404).
>
> That powerpc patch you reference is doing the same thing you are doing
> here.

Probably. I don't know this hardware, so my assumption was based on the commit descriptions.

>
>> - ARM UP system: TI's am437xx SoCs, for example.
>> Here everything starts from drivers/irqchip/irq-gic.c -> gic_handle_irq()
>>
>
>> GIC IRQ handler gic_handle_irq() may process more than one IRQ without leaving HW IRQ mode
>> (during my experiments I saw up to 6 IRQs processed in one cycle).
>
> not only GIC. But then what good does it do if it leaves and returns
> immediately back to the routine?
>
>> As a result, it was concluded that, with the current HW/SW and all IRQs forced threaded [1],
>> it is potentially possible to predict system behavior, because gic_handle_irq() will
>> do the same things for most of the IRQs it processes.
>> But once there are chained [2] or IRQF_NO_THREAD [3] IRQs - complete unpredictability.
>
> I would not go as far as "complete unpredictability". What you do (or
> should do) is test the system for a longer period of time with
> different behavior in order to estimate the worst case.
> You can't predict the system anyway since it is way too complex. Just
> try something that ensures that cyclictest is no longer cache hot and
> see what happens then.

I understand that. That's the current plan, and work is in progress.
The nearest target is to get rid of all -RT-specific backtracks and to
ensure the TI -RT kernel supports the same functionality as non-RT.
The next step is to try to optimize.
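
Just to illustrate what I meant above by "more than one IRQ without leaving
HW IRQ mode": the ack loop in drivers/irqchip/irq-gic.c has roughly this
shape (a simplified sketch from memory, with SGI/IPI and error handling
dropped, so don't treat it as the exact upstream code):

	/* simplified sketch of gic_handle_irq(), not the exact upstream code */
	static void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
	{
		struct gic_chip_data *gic = &gic_data[0];
		void __iomem *cpu_base = gic_data_cpu_base(gic);
		u32 irqstat, irqnr;

		do {
			/* ack the highest-priority pending interrupt */
			irqstat = readl_relaxed(cpu_base + GIC_CPU_INTACK);
			irqnr = irqstat & GICC_IAR_INT_ID_MASK;

			if (irqnr >= 1020)	/* spurious - nothing left pending */
				break;

			/*
			 * With forced threading this is normally just a wakeup of
			 * the handler thread; for chained or IRQF_NO_THREAD IRQs
			 * the whole handler runs right here, still in HW IRQ mode,
			 * and the loop then picks up the next pending IRQ.
			 */
			handle_domain_irq(gic->domain, irqnr, regs);
		} while (1);
	}

So several back-to-back wakeups (or full handlers) can be executed before the
exception handler returns, which is where the "up to 6 IRQs in one cycle"
observation comes from.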

>
>> So, it was selected as a goal to have all PPI IRQs (forced) threaded. And if someone
>> requires faster IRQ handling, IRQF_NO_THREAD can always be added, but it will
>> be a custom solution then.
>>
>> I'd appreciate your comments - if the above problem is not a problem,
>> good - IRQF_NO_THREAD forever!
>
> Yes, we try to avoid IRQF_NO_THREAD under all circumstances. However it
> is required for low-level arch code. This includes basically
> everything that uses raw locks, which includes interrupt controllers
> (the "real" ones like GIC, or cascading ones like MSI or GPIO).
> Here it is simple - you have a cascading MSI interrupt controller and
> as such it should be marked IRQF_NO_THREAD.
> The latency spikes in the worst case are not huge, as explained earlier: the
> only thing your cascading controller is allowed to do is to mark the
> interrupt as pending (which, with threaded interrupts, is just a task
> wakeup).
> And this is not a -RT-only problem: it is broken in vanilla Linux with
> threaded interrupts as well.
>

Ok, I've got it. IRQF_NO_THREAD will be the solution for reference code and
for issues like this. I understand that each real -RT-based solution is
unique and needs to be specifically tuned, so if someone has a problem
with IRQF_NO_THREAD, it can easily be removed and replaced with any sort of
custom hacks/improvements.
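
For reference, since IRQF_NO_THREAD exempts the handler from forced
threading, the fix boils down to adding the flag to the MSI IRQ request in
drivers/pci/host/pci-dra7xx.c, roughly like this (quoting from memory, so
the exact arguments/context in the re-sent patch may differ slightly):

	ret = devm_request_irq(&pdev->dev, pp->irq,
			       dra7xx_pcie_msi_irq_handler,
			       IRQF_SHARED | IRQF_NO_THREAD,
			       "dra7-pcie-msi", pp);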

Thanks a lot for your comments.
I'll apply your previous comments and re-send.

--
regards,
-grygorii

