Subject: Re: Proposed SDIO layer rework
On Fri, 05 Sep 2008 13:45:55 +0200
Christer Weinigel <christer@weinigel.se> wrote:

> First some measurements. I've connected the SDIO bus to an Agilent
> scope to get some pretty plots.

Why doesn't Santa bring me any of the nice toys everyone else seems to
be getting :(

> Since more and more high-speed chips are being connected to embedded
> devices using SDIO, I believe that to get good SDIO performance, the
> SDIO layer has to be changed not to use a worker thread. This is
> unfortunately rather painful since it means that we have to add
> asynchronous variants of all the I/O operations, the sdio_irq thread
> has to be totally redone, and the SDIO IRQ enabling and disabling
> turned out to be a bit tricky.
>
> So, do you think that it is worth it to make such a major change to the
> SDIO subsystem? I believe so, but I guess I'm a bit biased. I can clean
> up the work I've done and make sure that everything is backwards
> compatible so that existing SDIO function drivers keep working (my
> current hack to the sdio_irq thread breaks existing drivers, and it is
> too ugly to live anyway, so I don't even want to show it to you yet),
> and add a demo driver which shows how to use the asynchronous functions.
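
(For concreteness, an asynchronous variant of one of the existing byte
I/O helpers might look roughly like the sketch below. sdio_readb_async
and sdio_async_done_t are hypothetical names, not an existing API, and
the actual patch may well look different.)

	typedef void (*sdio_async_done_t)(struct sdio_func *func, int err,
					  u8 val, void *data);

	/*
	 * Hypothetical non-blocking counterpart of sdio_readb(): submit
	 * the CMD52 and return immediately; @done is called from the
	 * host driver's completion path (typically IRQ context).
	 */
	int sdio_readb_async(struct sdio_func *func, unsigned int addr,
			     sdio_async_done_t done, void *data);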

The latency improvement is indeed impressive, but I am not convinced it
is worth it. An asynchronous API is much more complex and difficult to
work with (not to mention to read when trying to make sense of existing
code), and SDIO is not even an asynchronous bus to begin with.

Also, as the latency is caused by CPU cycles, the problem will diminish
as hardware improves.

But...

> But if you don't like this idea at all, I'll probably just keep it as a
> patch and not bother as much with backwards compatibility. This is a
> lot more work for me though, and I'd much prefer to get it into the
> official kernel.

I do like the idea of reducing latencies (and generally improving the
performance of the MMC stack). The primary reason I haven't done
anything myself is lack of hardware to test with and of proper
instrumentation.

There are really two issues here, which aren't necessarily related:
actual interrupt latency and command completion latency.

The main culprit in your case is the command completion one. Perhaps
there is some other way of solving that inside the MMC core? E.g. we
could spin instead of sleeping while we wait for the request to
complete. Most drivers never use process context to handle a request
anyway. We would need to determine when the request is small enough
(all non-busy, non-data commands?) and that the CPU is slow enough. I
also saw something about a new trigger interface that could make this
efficient.
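
A minimal sketch of that idea, assuming completion_done() is available
and with mmc_req_is_fast() as a hypothetical predicate for the "small
enough" test:

	static void mmc_wait_done(struct mmc_request *mrq)
	{
		complete(mrq->done_data);
	}

	void mmc_wait_for_req(struct mmc_host *host, struct mmc_request *mrq)
	{
		DECLARE_COMPLETION_ONSTACK(complete);

		mrq->done_data = &complete;
		mrq->done = mmc_wait_done;

		mmc_start_request(host, mrq);

		if (mmc_req_is_fast(mrq)) {
			/* Short command: burn a few microseconds of CPU
			 * rather than pay for two context switches. */
			while (!completion_done(&complete))
				cpu_relax();
		} else {
			wait_for_completion(&complete);
		}
	}

The hard part is the cutoff heuristic: spinning only pays off when the
expected completion time is well below the cost of two context switches.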

Does this sound like something worth exploring?

Rgds
--
-- Pierre Ossman

Linux kernel, MMC maintainer http://www.kernel.org
rdesktop, core developer http://www.rdesktop.org

WARNING: This correspondence is being monitored by the
Swedish government. Make sure your server uses encryption
for SMTP traffic and consider using PGP for end-to-end
encryption.