Subject: Re: kernel thread support - LWP's
Larry McVoy wrote:
> : Hi Larry, there is someone in our group at CERN also working on user
> : level threads. His measurements (benchmarks in L1 cache of course) are
> : 0.05 microseconds for context switch in user space.
>
> Interesting. I just coded up a little benchmark that shows .05 usecs
> is 2x what a procedure call costs on a 400Mhz Celeron. Kinda makes
> me wonder exactly what sort of "context" he is saving and restoring.
> I kind of doubt he's saving/restoring everything, like floating point
> registers, etc. But whatever.

No, he's not saving/restoring much. An advantage of a user-space context
switch is that the context is rarely as big as the full register set --
you only have to save whatever the compiler says is live at that point.
That only applies when the switch is co-operative (though there are ways
to mix that with pre-emption).
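
A rough sketch of the shape, using the portable POSIX <ucontext.h> calls
rather than the actual hand-rolled switch being measured (this is just my
illustration -- swapcontext() saves the whole register set and the signal
mask, so it's the heavyweight version, but the co-operative structure is
the same):

/* Two contexts co-operatively yielding to each other. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, worker_ctx;
static char worker_stack[64 * 1024];

static void worker(void)
{
    for (int i = 0; i < 3; i++) {
        printf("worker: step %d\n", i);
        /* Co-operative yield back to the main context. */
        swapcontext(&worker_ctx, &main_ctx);
    }
}

int main(void)
{
    getcontext(&worker_ctx);
    worker_ctx.uc_stack.ss_sp = worker_stack;
    worker_ctx.uc_stack.ss_size = sizeof(worker_stack);
    worker_ctx.uc_link = &main_ctx;     /* where to go if worker() returns */
    makecontext(&worker_ctx, worker, 0);

    for (int i = 0; i < 3; i++) {
        printf("main: resuming worker\n");
        swapcontext(&main_ctx, &worker_ctx);
    }
    return 0;
}

A hand-rolled switch done at a call boundary only has to stash the
callee-saved registers and the stack pointer, since the compiler has
already spilled everything else around the call -- which is why there's
so little context to save.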

> : Now you can say that a real app will swamp this with cache misses. But
> : when it's within the cache, ~2-3 microseconds kernel vs. 0.05
> : microseconds user is a pretty severe difference.
>
> Really? I doubt it. I understand the need I just think he's going about it
> wrong. If what you want is low latency packet transfers, the fastest way
> is no context switch at all. The device should place the data in memory and
> you should be sitting there waiting for it.

That's very close to what happens. Calculations are taking place, but
polling code is inserted (by hand for now, by the compiler eventually) so
that execution branches when something interesting happens. The branch
points conveniently tend not to have much context to save.
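
To illustrate the inserted polling (the flag and handler names here are
made up; in the real thing the status word is written into host memory by
the NIC via DMA):

#include <stdint.h>
#include <stdio.h>

/* Stands in for a status word the device DMAs into host memory;
 * volatile keeps the compiler from hoisting the poll out of the loop. */
static volatile uint32_t rx_ready;

/* Hypothetical handler -- stands in for the demultiplexing code. */
static void handle_packet(void)
{
    printf("packet handled\n");
}

static double compute_step(double x)
{
    return x * 1.0000001 + 1e-9;    /* stand-in for the real calculation */
}

double compute(double x, long iterations)
{
    for (long i = 0; i < iterations; i++) {
        x = compute_step(x);

        /* Poll point, inserted by hand (later by the compiler) at a
         * spot where little register state is live: a load, a test,
         * and a branch when the device has posted something. */
        if (rx_ready) {
            rx_ready = 0;
            handle_packet();
        }
    }
    return x;
}

int main(void)
{
    printf("result: %g\n", compute(1.0, 1000000));
    return 0;
}

The poll costs a couple of instructions per iteration, and when it does
branch there's almost nothing to save.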

> : Now you're generalising... the system here responds to events entirely
> : in user space.
>
> Not really. The device generating the packets runs kernel code, does it
> not?

No, it doesn't. It's entirely in user space -- which makes it an unusual
example.

> : I kinda agree that polling a device using modified-compiler generated
> : code does not look like the right way at first... but this model is the
> : only one I know of where an Intel box can saturate a Gigabit Ethernet
> : link in both directions at once, with 6% CPU load and consistently <50
> : microseconds response latency (min. 25 microseconds).
>
> If you are running TCP/IP, I'm very impressed. If not, I'm not. If you
> are just blasting and receiving ethernet packets, so what?

It's just raw ethernet, but it does carry a multiplexed data stream with
different streams waking up different threads.

It is relevant -- don't compare with in-kernel TCP/IP, compare with
in-kernel AF_PACKET.
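
For concreteness, the in-kernel baseline I mean is a PF_PACKET raw socket
with the demultiplexing done in user space after the copy -- roughly this
(the stream-id byte is made up for illustration, it's not the actual
framing):

#include <stdio.h>
#include <sys/socket.h>
#include <arpa/inet.h>          /* htons() */
#include <linux/if_ether.h>     /* ETH_P_ALL, ETH_HLEN */

int main(void)
{
    unsigned char frame[2048];

    /* Raw socket: the kernel hands us every ethernet frame,
     * headers included (needs root). */
    int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) {
        perror("socket(PF_PACKET)");
        return 1;
    }

    for (;;) {
        ssize_t n = recv(fd, frame, sizeof(frame), 0);
        if (n < 0) {
            perror("recv");
            return 1;
        }
        if (n < ETH_HLEN + 1)
            continue;

        /* Demultiplex on the first payload byte and wake whichever
         * thread owns that stream (just a printf here). */
        unsigned stream = frame[ETH_HLEN];
        printf("frame of %zd bytes for stream %u\n", n, stream);
    }
}

Every frame in that path takes an interrupt, the kernel receive code and
a copy to user space before the demultiplexing even starts; the user-space
version skips all of that.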

A big event-processing application will run on this if it works, so it's
a "real world" thing, as it were.

Someone else has managed TCP/IP at 83 MByte/sec, which doesn't quite
saturate the link and uses all the CPU. Never mind, eh? :)

> Again, if you are comparing apples to apples, you have a fantastic
> point and I want to learn more and I'll happily eat my words in public.
> But if you are comparing TCP/IP performance with raw packet performance,
> that's like comparing a Geo with a Ferrari. Not exactly meaningful.

No, I'm comparing user-space raw ethernet + demultiplexing with in-kernel
raw ethernet + demultiplexing.

-- Jamie

