Date: 7 Mar 2000
Subject: Re: DMA mapping
From: Giuliano Procida <gprocida@madge.com>

Hi.

On Tue, Mar 07, 2000 at 04:47:09AM -0800, David S. Miller wrote:
> I suppose you saw the recycling mechanism provided by the
> Microsoft NT driver interfaces and considered if such a
> scheme would benefit Linux as well?

I have not seen the NT driver, though I did have a look through the
Netware driver at some point. The recycling mechanism is an
ATM/Linux-specific thing that Werner mentions in his message. I did
actually test it at some point; it seemed to work, but I don't know
whether it is safe.

> The map/unmap costs on x86 are NULL. On other machines there are
> cache (either at the CPU or at the PCI controller) coherency issues to
> deal with, so even if we recycled there would still be overhead at
> "DMA transfer done" time.

I assumed that was what pci_dma_sync was for.
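
That is, something along these lines at "DMA transfer done" time
(sketch only; pdev and the desc fields are illustrative names, not the
real driver's, standing for the PCI device and the mapping recorded
when pci_map_single() was called):

	/* Make the freshly DMA'd receive buffer visible to the CPU
	 * before the driver or the stack looks at the data. */
	pci_dma_sync_single(pdev, desc->dma_addr, desc->len,
			    PCI_DMA_FROMDEVICE);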

[pool scheme for 2.5]

> But this is 2.5.x material at this point. The entry allocation
> algorithms used by ports which use dynamic DMA mappings are pretty
> efficient, and ways are being discovered to make them even faster.
> So it's not a critical issue.

> On x86 all of these operations are NOPs anyways if that is the
> platform whose performance you care about. :-)

It happens that the only serious requests for driver support I've had
are from some brave people building a Linux Alpha supercomputing
cluster over an ATM fabric using Madge adapters. I've also found a
vict^H^H^Holunteer to test the driver on sparc[64]. So I do care about
non-x86 platforms.

> I think if you really feel that this extra software driver state is a
> big deal, walk over your driver and try to discover ways to somehow
> eliminate at least one register or descriptor read or write in a hot
> code path. I bet the gains from that will far outweigh whatever
> performance burden you will suffer on x86 from keeping track of DMA
> and cpu addresses.

It's mainly memory usage: I'm looking at adding another 1 to 5 kB of
driver state (depending on queue lengths and pointer size) and some
free-list/pointer mangling.

The hot (latency-affecting?) part of the receive path is (roughly):

interrupt status check (one register read)
lock RXin Q
check status and get skb address from handle (read descriptor)
ATM layer checks (skb and vc stuff)
put_skb, push skb
update RXin Q pointers
unlock RXin Q
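
In code it would look roughly like this (a sketch only; all the
struct, register and helper names are illustrative, not the actual
driver's, the ATM layer checks are elided, and error handling is left
out):

	/* Sketch of the RX completion hot path listed above. */
	static void rx_done(struct my_dev *dev)
	{
		unsigned long flags;

		/* interrupt status check (one register read) */
		if (!(readl(dev->regs + INT_STATUS) & RX_DONE))
			return;

		spin_lock_irqsave(&dev->rxin_lock, flags);	/* lock RXin Q */
		while (rxin_pending(dev)) {
			struct rx_desc *desc = &dev->rxin[dev->rxin_tail];
			/* check status and get skb address from handle */
			struct sk_buff *skb = handle_to_skb(desc->handle);
			struct atm_vcc *vcc = ATM_SKB(skb)->vcc;

			skb_put(skb, desc->len);	/* put_skb */
			vcc->push(vcc, skb);		/* push skb up to the ATM layer */
			/* update RXin Q pointers */
			dev->rxin_tail = (dev->rxin_tail + 1) % RXIN_SIZE;
		}
		spin_unlock_irqrestore(&dev->rxin_lock, flags);	/* unlock RXin Q */
	}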

The hot part of the transmit path is a bit more complicated:

ATM layer checks
alloc TX frag list and fill it
make a TX descriptor (frag list ptr and skb handle from address)
lock TXin Q
copy (eliminated by inline?) TX descriptor into TXin Q
update TXin Q pointers
notify adapter (two register writes)
unlock TXin Q
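
And a similar sketch for the transmit side (again, illustrative names
only; the ATM layer checks happen in the caller and are not shown):

	/* Sketch of the TX submission hot path listed above. */
	static int tx_submit(struct my_dev *dev, struct sk_buff *skb)
	{
		struct tx_desc desc;
		unsigned long flags;

		/* alloc TX frag list and fill it (one fragment per skb today) */
		desc.frag_list = alloc_frag_list(dev);
		desc.frag_list->addr = pci_map_single(dev->pdev, skb->data,
						      skb->len, PCI_DMA_TODEVICE);
		desc.frag_list->len = skb->len;
		desc.handle = skb_to_handle(skb);	/* skb handle from address */

		spin_lock_irqsave(&dev->txin_lock, flags);	/* lock TXin Q */
		dev->txin[dev->txin_head] = desc;	/* copy descriptor into TXin Q */
		/* update TXin Q pointers */
		dev->txin_head = (dev->txin_head + 1) % TXIN_SIZE;
		/* notify adapter (two register writes) */
		writel(dev->txin_head, dev->regs + TXIN_HEAD);
		writel(TX_KICK, dev->regs + TX_DOORBELL);
		spin_unlock_irqrestore(&dev->txin_lock, flags);	/* unlock TXin Q */

		return 0;
	}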

I think the only significant way of improving this would be to
maintain a pre-allocated pool of TX fragment lists (the lists are
currently all of length 1, so it would be trivial).
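
A trivial free list would probably do for the pool, something like the
following (sketch; names invented, single-entry lists assumed, access
protected by the TXin Q lock):

	/* Pool of pre-allocated single-entry TX fragment lists. */
	struct tx_frag {
		struct tx_frag *next;	/* free-list link */
		dma_addr_t addr;
		unsigned int len;
	};

	static struct tx_frag *alloc_frag_list(struct my_dev *dev)
	{
		struct tx_frag *fl = dev->frag_pool;

		if (fl)
			dev->frag_pool = fl->next;
		return fl;
	}

	static void free_frag_list(struct my_dev *dev, struct tx_frag *fl)
	{
		fl->next = dev->frag_pool;
		dev->frag_pool = fl;
	}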

Giuliano.
PS: Q = queue = FIFO, for anyone who wondered.
--
mail: gprocida@madge.com / myxie@debian.org | public PGP key ID: 93898735
home: +44 181 351 1172 / Flat 5, 135 Palmerston Road, London N22 8RW, UK
work: +44 1753 661 305 / Madge Networks, Wexham Springs, Slough SL3 6PJ, UK
