Subject: Re: Van Jacobson's net channels and real-time
linux-os (Dick Johnson) wrote:
> On Mon, 24 Apr 2006, Auke Kok wrote:
>
>> Ingo Oeser wrote:
>>> On Saturday, 22 April 2006 15:49, Jörn Engel wrote:
>>>> That was another main point, yes. And the endpoints should place as
>>>> little burden on the bottlenecks as possible. One bottleneck is the
>>>> receive interrupt, which shouldn't wait for cachelines from other CPUs
>>>> too much.
>>> That's right. This will be made a non-issue with early demuxing
>>> on the NIC and MSI (or was it MSI-X?), which will select
>>> the right CPU based on hardware channels.
>> MSI-X. With MSI you still have only one CPU handling all MSI interrupts,
>> which doesn't look any different from ordinary interrupts. MSI-X will allow
>> much better interrupt handling across several CPUs.
>>
>> Auke
>> -
>
> Message signaled interrupts are just a kludge to save a trace on a
> PC board (read: make junk cheaper still).

Yes. Also, in PCI-Express there is no physical interrupt line anymore due to
the architecture, so even classical interrupts are sent as a "message" over the bus.

> They are not faster and may even be slower.

Thus, in the case of PCI-Express, MSI interrupts are just as fast as
ordinary ones. I have no numbers on whether MSI is faster or slower than, e.g.,
interrupts on PCI-X, but generally speaking the PCI-Express bus is not
designed to be "low latency" at all; at best it gives you X latency, where X
is on the order of microseconds. The MSI message itself only takes 10-20
nanoseconds, but all the handling probably adds a large factor to that
(1000 or so). No clue about classical interrupt-line latency - anyone?

> They will not be the salvation of any interrupt latency problems.

This is also not the problem - we really don't care that our 100,000 packets
each arrive 20 usec later, just as long as the bus is not idle during those
intervals. We would, however, care a lot if 25,000 of those arrived directly
at the proper CPU, without one of the CPUs having to arbitrate on every
interrupt. That's the idea, anyway.
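
To make that concrete: a driver would grab one MSI-X vector per hardware RX
queue with the existing pci_enable_msix() interface, and each vector can then
be pinned to its own CPU through /proc/irq/<vector>/smp_affinity. A rough
sketch (untested; the queue count and all the my_* names are made up):

#include <linux/pci.h>
#include <linux/interrupt.h>

#define MY_NR_RX_QUEUES	4	/* made-up per-CPU queue count */

static struct msix_entry my_msix[MY_NR_RX_QUEUES];

static irqreturn_t my_rx_intr(int irq, void *data, struct pt_regs *regs)
{
	/* each vector fires for exactly one RX queue, on whichever CPU
	 * /proc/irq/<vector>/smp_affinity steers it to */
	return IRQ_HANDLED;
}

static int my_setup_msix(struct pci_dev *pdev)
{
	int i, err;

	for (i = 0; i < MY_NR_RX_QUEUES; i++)
		my_msix[i].entry = i;	/* slot in the device's MSI-X table */

	/* ask for one vector per queue; a positive return would be the
	 * number of vectors actually available */
	err = pci_enable_msix(pdev, my_msix, MY_NR_RX_QUEUES);
	if (err)
		return err;

	for (i = 0; i < MY_NR_RX_QUEUES; i++) {
		err = request_irq(my_msix[i].vector, my_rx_intr,
				  SA_INTERRUPT, "my-rx", pdev);
		if (err)
			return err;
	}
	return 0;
}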

Nowadays, with IRQ throttling, we introduce a lot of designed latency anyway,
especially with network devices.
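
For comparison, this is roughly what that designed latency looks like in a
NAPI driver: the first packet raises an interrupt, the handler masks further
RX interrupts, and the rest of the burst is drained from a softirq. Sketch
only - the my_* hardware helpers are made up:

static irqreturn_t my_intr(int irq, void *data, struct pt_regs *regs)
{
	struct net_device *dev = data;

	if (netif_rx_schedule_prep(dev)) {
		my_disable_rx_irq(dev);		/* made-up hardware helper */
		__netif_rx_schedule(dev);	/* queue dev->poll softirq */
	}
	return IRQ_HANDLED;
}

static int my_poll(struct net_device *dev, int *budget)
{
	int limit = min(*budget, dev->quota);
	int done = my_rx_ring_drain(dev, limit);	/* made up */

	*budget -= done;
	dev->quota -= done;

	if (done < limit) {		/* ring drained */
		netif_rx_complete(dev);
		my_enable_rx_irq(dev);	/* back to interrupt mode */
		return 0;		/* leave polling */
	}
	return 1;			/* more work, stay on the poll list */
}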

> The solution for increasing networking speed,
> where the bit-rate on the wire gets close to the bit-rate on the
> bus, is to put more and more of the networking code inside the
> network board. The CPU gets interrupted after most things (like
> network handshakes) are complete.

That is a limited view of the situation. You could argue that current
CPUs have so much power that they can easily do a lot of the processing
instead of the hardware, and thus warm the caches for userspace, set up
sockets, etc. This is the whole idea of Van Jacobson's net channels. Putting
more offloading into the hardware brings so many problems with it that they
are far easier solved in the OS.
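
As I understand Van Jacobson's slides, the channel itself is little more than
a cache-friendly single-producer/single-consumer ring: the driver only ever
writes 'head', the consumer only ever writes 'tail', and the two live on
separate cachelines so nothing bounces between CPUs. A rough userspace
approximation of the idea (my sketch, not his code):

#define SLOTS	256	/* power of two, so the index wraps by masking */

struct channel {
	volatile unsigned long head __attribute__((aligned(64)));  /* producer */
	volatile unsigned long tail __attribute__((aligned(64)));  /* consumer */
	void *slot[SLOTS];
};

/* producer side, e.g. the driver's RX path: returns 0 when full */
static int channel_put(struct channel *c, void *pkt)
{
	if (c->head - c->tail >= SLOTS)
		return 0;
	c->slot[c->head & (SLOTS - 1)] = pkt;
	__sync_synchronize();	/* publish the slot before bumping head */
	c->head++;
	return 1;
}

/* consumer side: protocol code running in the application's context */
static void *channel_get(struct channel *c)
{
	void *pkt;

	if (c->tail == c->head)
		return NULL;	/* empty */
	pkt = c->slot[c->tail & (SLOTS - 1)];
	__sync_synchronize();
	c->tail++;
	return pkt;
}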


Cheers,

Auke