Subject: Re: FTP benchmark proposal

[having written probably the fastest linux ftp server there is, lemme
throw my $.04 into the mix :)]

On Sun, 27 Jun 1999, Larry McVoy wrote:

> did the math, and came up with the rather impressive number of 16.8MB/sec
> sustained over a 24 hour period. That's really bloody amazing. I have
> to hand it to the FreeBSD guys, that's an impressive number. I'm not
> positive that Linux can do that but I sure as heck want to find out.

realize that the freebsd guys (david in particular) have done some nice
hacking to get their box spewing that efficiently. They have a home
grown lightweight ftp server that is quite nice. It's not as Cool (tm) as
hftpd, but it certainly gets the job done ;) :). They've also added
sendfile() to their kernel and have almost-zero-copy networking which
greatly reduced their cpu load.
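
the core of that trick looks roughly like the sketch below on the linux
side (just an illustration of sendfile(2), not anybody's real server code;
data_fd is assumed to be an already-connected data socket and error
handling is trimmed down):

/* send one file down a connected data socket with sendfile(2); the file
 * pages go straight from the page cache to the socket, so the data never
 * has to be copied through user space. */
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

static int send_whole_file(int data_fd, const char *path)
{
    struct stat st;
    off_t off = 0;
    int file_fd = open(path, O_RDONLY);

    if (file_fd < 0)
        return -1;
    if (fstat(file_fd, &st) < 0) {
        close(file_fd);
        return -1;
    }
    while (off < st.st_size) {
        ssize_t n = sendfile(data_fd, file_fd, &off, st.st_size - off);
        if (n <= 0)
            break;
    }
    close(file_fd);
    return off == st.st_size ? 0 : -1;
}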

We have the server. It uses around 20k user mem / connection, plus the
usual kernel overhead per thread. It does ls caching which turns the
common ls into a few syscalls and a sendfile(). No problems here.
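
the ls cache is in the same spirit. this is not the hftpd code, just an
illustration of the idea (the cache file and the bare name-per-line
listing format are made up for the sketch): build the listing once, and
every later LIST of that directory costs just an open(), an fstat() and a
sendfile() of the cached copy.

#include <sys/sendfile.h>
#include <sys/stat.h>
#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* cache miss: write a name-per-line listing of 'dir' into 'cache_path' */
static int build_listing(const char *dir, const char *cache_path)
{
    DIR *d = opendir(dir);
    FILE *out = d ? fopen(cache_path, "w") : NULL;
    struct dirent *de;

    if (!out) {
        if (d)
            closedir(d);
        return -1;
    }
    while ((de = readdir(d)) != NULL)
        fprintf(out, "%s\r\n", de->d_name);   /* ftp lines end in CRLF */
    fclose(out);
    closedir(d);
    return 0;
}

/* cache hit: the whole LIST reply is a handful of syscalls */
int send_cached_listing(int data_fd, const char *dir, const char *cache_path)
{
    struct stat st;
    off_t off = 0;
    int fd = open(cache_path, O_RDONLY);

    if (fd < 0) {
        if (build_listing(dir, cache_path) < 0)
            return -1;
        fd = open(cache_path, O_RDONLY);
    }
    if (fd < 0)
        return -1;
    if (fstat(fd, &st) < 0) {
        close(fd);
        return -1;
    }
    while (off < st.st_size &&
           sendfile(data_fd, fd, &off, st.st_size - off) > 0)
        ;
    close(fd);
    return off == st.st_size ? 0 : -1;
}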

But the kernel has to be hacked to support it. We'd need to apply ingo's
nice threading fixes that let us have many more threads, and stephen's
large fd set patches so that this monster threaded server can have lots of
files open. No problems, just a matter of doing it :)
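
the fd half of that is just a matter of asking for more at startup,
something like the sketch below (the numbers are illustrative; a stock
kernel will refuse to go anywhere near what a few thousand simultaneous
transfers need, which is exactly what the large fd set patches are for):

#include <stdio.h>
#include <sys/resource.h>

/* bump the per-process descriptor limit before spawning the threads;
 * raising the hard limit needs root, and an unpatched kernel still caps
 * it at its compile-time maximum. */
static int raise_fd_limit(rlim_t want)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) < 0)
        return -1;
    rl.rlim_cur = want;
    if (rl.rlim_max < want)
        rl.rlim_max = want;
    if (setrlimit(RLIMIT_NOFILE, &rl) < 0) {
        perror("setrlimit(RLIMIT_NOFILE)");
        return -1;
    }
    return 0;
}

calling something like raise_fd_limit(16384) early in main() is the idea;
how far you can actually push it is the part that needs the kernel work.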

So our kernel won't be _as_ tuned to do this work, but 2.3/4 will do very
nicely indeed and further kernels will do even better once we get the
kiobuf zero copy stuff sorted out.

> a) get VA or Penguin to sponsor the hardware and the test lab,
> I'll talk to them and try and make that happen.
> b) create a test lab with enough clients to generate the load
> with a setup like so:

the zd labs guys already have this hardware and networking gear set up.
I'll phone them; there is a remote chance they might be interested.

a word on hardware: we _definitely_ want to use a gigabit card rather than
a pile of 100mb cards. gigabit cards are cheapish compared to the drives
and memory and such in the machine, and they aggregate lots of traffic
infinitely better than a bunch of 100mb cards. (the exception being the
clever new adaptec 6915 100mb chips, but our driver doesn't seem to use
all their clever features yet)
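
(back of the envelope, from larry's 16.8MB/sec number: that's about
134 megabits/sec sustained, or roughly 1.4 terabytes moved over the 24
hours, so the aggregate is already past what a single 100mb card can
carry before you leave any headroom.)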

> c) then create a workload that moves the data and see what happens.
> Either I'll do this or I'll work with someone to do this. I have a
> lame start at it already.

realize that serving 6000 connections over 100mb (or even 10mb) is vastly
different from serving 6000 connections that are all limited by their
modems. the connections are completed much more rapidly, and in fact
you'll have to play games with the server config so that you don't run out
of sockets on the server machine as they sit around in TIME_WAIT. I run
into this all the time stress testing hftpd.
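
one of those games, sketched below (strictly a stress-test hack, not
something you would ship): closing the data socket with a zero-timeout
SO_LINGER makes close() send an RST instead of the normal FIN handshake,
so the socket never sits in TIME_WAIT and the port is free again
immediately.

#include <sys/socket.h>
#include <unistd.h>

/* abortive close: an RST goes out on close() and no TIME_WAIT socket is
 * left behind.  fine for hammering a test box, rude on a real server. */
static void abortive_close(int data_fd)
{
    struct linger lg;

    lg.l_onoff = 1;    /* linger enabled */
    lg.l_linger = 0;   /* for zero seconds, i.e. reset the connection */
    setsockopt(data_fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
    close(data_fd);
}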

> Any interest in this? Seems like it would be a cool setup for this and
> for http test loads. And once we get it going, it might be a nice setup
> to challenge microsloth with...

If you do go through with it, let me know. We'll get hftpd set up and
spewing files like nothing else ;)

-- zach

- - - - - -
007 373 5963


