Subject: Re: e1000 performance hack for ppc64 (Power4)
From: Herman Dierks
Date: 15 Jun 2003

Look folks, we run 40 to 48 GigE adapters in a 32-way p690 on AIX, and they
basically all run at full speed; let me see you try that on most of these
other boxes you are talking about. Same adapter, same hardware logic.
I have also seen what many of those other boxes do when data or structures
are not aligned on 64-bit boundaries.
The PPC hardware does not have those 64-bit alignment issues. So each machine
has some warts; I have yet to see a perfect one.

If you want a lot of PCI adapters in a box, it takes a number of bridge
chips and other I/O links to do that.
Memory controllers like to deal in cache lines.
For larger packets, like jumbo frames or large send (TSO), the few extra
DMAs are not an issue: the packets are so large that the DMA soon becomes
aligned. With TSO being the default, the small-packet case becomes less
important anyway. It's more of an issue on 2.4, where TSO is not provided.
We also want this to run well if someone does not want to use TSO.
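
To put rough numbers on that (this is just my back-of-the-envelope sketch,
not anything out of the driver): a buffer at an arbitrary offset has at most
two partial cache lines, one at the head and one at the tail, so the fraction
of lines the memory controller has to read-modify-write shrinks as the packet
grows. The 128-byte line size and the 2-byte misalignment below are only
illustrative:

#include <stdio.h>

/* Back-of-the-envelope only, not driver code.  Count how many cache lines
 * a len-byte DMA buffer starting at byte 'offset' within a line touches,
 * and how many of those are partial lines that a cache-line-oriented
 * memory controller has to read-modify-write.  128 bytes is used as a
 * representative POWER4 line size. */
#define CACHE_LINE 128

static unsigned lines_touched(unsigned offset, unsigned len)
{
	return (offset + len + CACHE_LINE - 1) / CACHE_LINE;
}

static unsigned partial_lines(unsigned offset, unsigned len)
{
	unsigned total = lines_touched(offset, len);
	unsigned n = 0;

	if (offset % CACHE_LINE)		/* unaligned head */
		n++;
	if ((offset + len) % CACHE_LINE)	/* unaligned tail */
		n++;
	return n > total ? total : n;
}

int main(void)
{
	unsigned sizes[] = { 64, 1500, 9000 };	/* tiny, MTU 1500, jumbo */
	unsigned i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		unsigned total = lines_touched(2, sizes[i]);
		unsigned partial = partial_lines(2, sizes[i]);

		printf("%5u-byte packet: %3u lines, %u partial (%.0f%%)\n",
		       sizes[i], total, partial, 100.0 * partial / total);
	}
	return 0;
}

A misaligned jumbo frame leaves only a couple of percent of its lines
partial, while a tiny non-TSO packet can be partial through and through.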

It's only the non-TSO, MTU-1500 case that we are discussing here, so copying
a few bytes is really not a big deal: the data is already in cache from the
copy into the kernel. If it lets the adapter run at speed, that's what
customers want and what we need.
Granted, if the hardware could deal with this we would not have to, but
that's not the case today, so I want to spend a few CPU cycles to get the
best performance.
Again, if this is not done on other platforms, I don't understand why you
care.
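
For anyone who has not read the patch, the copy being argued about amounts to
something like the following sketch. It is not the actual e1000 change; the
helper name and buffer handling are made up, and it is plain user-space C so
it compiles on its own. The point is only that a frame whose start address is
not on a cache-line boundary gets copied into an aligned bounce area before
the adapter DMAs it, and no copy happens when the data is already aligned:

#define _POSIX_C_SOURCE 200112L
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define CACHE_LINE 128	/* representative POWER4 line size, illustrative */

/* Hypothetical helper, not from the driver: return a pointer the NIC can
 * DMA from.  If 'data' already sits on a cache-line boundary it is used
 * as-is; otherwise the frame is copied into the caller-supplied aligned
 * bounce buffer.  For an MTU-1500 frame the bytes are still cache-hot
 * from the copy into the kernel, so the memcpy is cheap. */
static const void *tx_buf_for_dma(const void *data, size_t len, void *bounce)
{
	if (((uintptr_t)data & (CACHE_LINE - 1)) == 0)
		return data;		/* already aligned, no copy */

	memcpy(bounce, data, len);	/* small, cache-hot copy */
	return bounce;
}

int main(void)
{
	char frame[1514];
	void *bounce;
	const void *dma_src;

	if (posix_memalign(&bounce, CACHE_LINE, sizeof(frame)))
		return 1;
	memset(frame, 0, sizeof(frame));

	/* Simulate a frame that most likely does not start on a line boundary. */
	dma_src = tx_buf_for_dma(frame + 2, sizeof(frame) - 2, bounce);
	(void)dma_src;

	free(bounce);
	return 0;
}

In the real driver this is of course done on the skb data in the transmit
path, and it is compiled in only for the platforms that need it, which is the
point about it being localized below.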

If we have to do this for the PPC port, fine. I have not seen any of you
suggest a better solution that works and would not be a worse hack to TCP or
other code. Anton tried various other ideas before we fell back to doing it
the same way we did in AIX. This code is very localized and is only used by
the platforms that need it, so I don't see the big issue here.

Herman


"David S. Miller" <davem@redhat.com> on 06/14/2003 01:08:50 AM

To: ltd@cisco.com
cc: anton@samba.org, haveblue@us.ltcfwd.linux.ibm.com, Herman
Dierks/Austin/IBM@IBMUS, scott.feldman@intel.com, dwg@au1.ibm.com,
linux-kernel@vger.kernel.org, Nancy J Milliner/Austin/IBM@IBMUS,
Ricardo C Gonzalez/Austin/IBM@ibmus, Brian
Twichell/Austin/IBM@IBMUS, netdev@oss.sgi.com
Subject: Re: e1000 performance hack for ppc64 (Power4)



From: Lincoln Dale <ltd@cisco.com>
Date: Sat, 14 Jun 2003 15:52:35 +1000

> can we have the TCP retransmit side take a performance hit if it needs
> to realign buffers?

You don't understand, the person who mangles the packet
must make the copy, not the person not doing the packet
modifications.

> for a "high performance app" requiring gigabit-type speeds,

...we probably won't be using ppc64 and e1000 cards, yes, I agree
:-)

Anton, go to the local computer store and pick up some tg3
cards or a bunch of Taiwan specials :-)



