From: Arnd Bergmann
Date: Thu, 15 Nov 2018
Subject: Re: [RFC PATCH 10/12] soc: qcom: ipa: data path

On Wed, Nov 14, 2018 at 7:31 PM Alex Elder <elder@linaro.org> wrote:
>
> On 11/7/18 8:55 AM, Arnd Bergmann wrote:
> > On Wed, Nov 7, 2018 at 1:33 AM Alex Elder <elder@linaro.org> wrote:
> >>
> >> This patch contains "ipa_dp.c", which includes the bulk of the data
> >> path code. There is an overview in the code of how things operate,
> >> but there are already plans to rework this portion of the driver.
> >>
> >> In particular:
> >> - Interrupt handling will be replaced with a threaded interrupt
> >> handler. Currently handling occurs in a combination of
> >> interrupt and workqueue context, and this requires locking
> >> and atomic operations for proper synchronization.
> >
> > You probably don't want to use just a threaded IRQ handler to
> > start the poll function, that would still require an extra indirection.
>
> That's a really good point. However I think that the path I'll
> take to *getting* to scheduling the poll in interrupt context
> will use a threaded interrupt handler. I'm hoping that will
> allow me to simplify the code in steps.
>
> The main reason for this split between working in interrupt
> context when possible, but pushing to a workqueue when not, is
> to allow IPA clock(s) to be turned off. Enabling the clocks
> is a blocking operation, so it can't be done in the top-half
> interrupt handler. The thought was it would be best to work
> in interrupt context--if the clock was already active--but
> to defer to a workqueue to turn the clock on if necessary.
>
> The result requires locking and duplication of code that I
> find to be pretty confusing--and hard to reason about. I
> have been planning to re-do things to be better suited to
> NAPI, and knowing that, I haven't given the data path as
> much attention as some of the rest.

Right, that sounds like a good plan: start making it use a
threaded IRQ handler first to clean up the code, and then
think about optimizing the NAPI wakeup once that works
reliably.
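
For that first step, something along these lines might already be
enough (just a sketch; ipa_isr_thread, ipa_request_irq and the "ipa"
cookie are made-up names, nothing from your driver):

#include <linux/interrupt.h>

/* Step 1 sketch: one threaded handler does all the work. It runs in
 * process context and may sleep, so the clock can be enabled inline
 * and the workqueue (with its locking) can go away.
 */
static irqreturn_t ipa_isr_thread(int irq, void *dev_id)
{
	/* enable the IPA clock (may block), then process completions */
	return IRQ_HANDLED;
}

static int ipa_request_irq(unsigned int irq, void *priv)
{
	/* NULL hard handler plus IRQF_ONESHOT: the core's default
	 * primary handler just wakes the thread.
	 */
	return request_threaded_irq(irq, NULL, ipa_isr_thread,
				    IRQF_ONESHOT, "ipa", priv);
}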

I think what you can do here eventually is to have
a combined threaded/non-threaded IRQ handler, where
the threaded handler can do everything it needs to do,
and the non-threaded handler does only one thing:
if all conditions are met for entering the NAPI handler
(waiting for rx/tx IRQ, clocks enabled, ...) we call
napi_schedule(), otherwise defer to the threaded handler.
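
As a rough sketch of that shape (all the names below are invented for
illustration, not actual IPA structures or helpers):

#include <linux/interrupt.h>
#include <linux/netdevice.h>

/* invented per-channel state, just to show the idea */
struct ipa_chan {
	struct napi_struct napi;
	atomic_t clock_on;	/* nonzero while the IPA clock is active */
};

/* hard handler: decide only whether NAPI can start right now */
static irqreturn_t ipa_isr(int irq, void *dev_id)
{
	struct ipa_chan *chan = dev_id;

	if (atomic_read(&chan->clock_on)) {
		napi_schedule(&chan->napi);	/* fast path */
		return IRQ_HANDLED;
	}

	return IRQ_WAKE_THREAD;		/* clock off: need to sleep */
}

/* threaded handler: may sleep, so it can turn the clock on first */
static irqreturn_t ipa_isr_thread(int irq, void *dev_id)
{
	struct ipa_chan *chan = dev_id;

	/* enable the IPA clock here (blocking), then hand off to NAPI */
	napi_schedule(&chan->napi);

	return IRQ_HANDLED;
}

/* registered with:
 * request_threaded_irq(irq, ipa_isr, ipa_isr_thread, 0, "ipa", chan);
 */

That keeps the hot path down to an atomic check plus napi_schedule(),
and all the blocking clock handling stays in the thread.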

Arnd
