From: John Garry
Subject: [PATCH RFC 0/1] Threaded handler uses irq affinity for when the interrupt is managed
Date: 2019-12-06
Hi,

As mentioned in [0], we are experiencing a scenario where data throughput
can be limited by the fact that a single CPU can be fully consumed handling
both the hard and threaded parts of a managed interrupt, while throughput
could be helped by allowing another CPU to handle the threaded part. That
same thread also includes, further into the discussion, CPU load figures
for the same change as in this patch.

As some more background: in cbf8699996a6 ("genirq: Let irq thread follow
the effective hard irq affinity"), a change was made to enforce that the
threaded and hard parts are kept on the same CPU, on the basis that the
threaded part should not be allowed to stray from the CPU of the hard
handler.
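
For reference, the relevant logic in irq_thread_check_affinity()
(kernel/irq/manage.c) after that commit looks roughly as below; this is
a paraphrased excerpt, and details may vary between kernel versions:

	raw_spin_lock_irq(&desc->lock);
	if (cpumask_available(desc->irq_common_data.affinity)) {
		const struct cpumask *m;

		/* Pin the irq thread to the effective hard irq affinity */
		m = irq_data_get_effective_affinity_mask(&desc->irq_data);
		cpumask_copy(mask, m);
	} else {
		valid = false;
	}
	raw_spin_unlock_irq(&desc->lock);

	if (valid)
		set_cpus_allowed_ptr(current, mask);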

Again in [0], Thomas said that it could be made optional whether we allow
the full irq affinity mask to be used. What that option should be based
on, I am not sure.

Ming Lei said it would be sensible to do it when the interrupt is managed,
so that is the basis of this change.
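
To illustrate, here is a minimal sketch of what that change to
irq_thread_check_affinity() could look like; the exact hunk in the
attached patch may differ:

	if (cpumask_available(desc->irq_common_data.affinity)) {
		const struct cpumask *m;

		/*
		 * A managed interrupt keeps its assigned affinity mask for
		 * its lifetime, so let the thread roam over the full mask
		 * rather than follow the effective hard irq affinity.
		 */
		if (irqd_affinity_is_managed(&desc->irq_data))
			m = desc->irq_common_data.affinity;
		else
			m = irq_data_get_effective_affinity_mask(&desc->irq_data);
		cpumask_copy(mask, m);
	}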

Aside from this, it is worth noting that there has been another discussion
on CPU lockup from relentless handling of hard interrupts [1]. Using
threaded interrupts was discussed there but seemingly rejected due to the
cost of the extra context switching. The conclusion in that discussion
seems to have been to use IRQ polling, but I have seen no recent update.

[0] https://lore.kernel.org/lkml/e0e9478e-62a5-ca24-3b12-58f7d056383e@huawei.com/
[1] https://lore.kernel.org/lkml/CACVXFVPCiTU0mtXKS0fyMccPXN6hAdZNHv6y-f8-tz=FE=BV=g@mail.gmail.com/

John Garry (1):
  genirq: Make threaded handler use irq affinity for managed interrupt

 kernel/irq/manage.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

--
2.17.1
