Subject: Re: reflections on the 3c905 v0.40 driver
On Apr 27, Chen Shiyuan wrote:

> Though I have no answer or comments on your reflections, I'm wondering if
> you can tell me what program you used to check the network transfer
> speed and where I can find it.

I've been asked several times about 'ntest' and how I got those numbers,
so I'm now sending it to the list...

'ntest' is actually just a small shell script sending data from
/dev/zero to /dev/null using 'rsh'. The simplest form would be

rsh remotehost "cat > /dev/null" < /dev/zero
and
rsh remotehost "cat /dev/zero" > /dev/null

and then watching the interface statistics with e.g. 'netstat -c -i eth0'.
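If neither 'buffer' nor 'mybuffer' is at hand, a rough overall rate can
already be had from 'time' plus a fixed amount of data from dd; a minimal
sketch (the count of roughly 100 MB is just an arbitrary example, divide it
by the elapsed seconds to get the mean rate):

time sh -c 'dd if=/dev/zero bs=63k count=1600 | rsh remotehost "cat > /dev/null"'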

ntest1 and ntest2 (attached below) use a slightly modified version
of 'buffer' which I've called 'mybuffer' to display transfer rates
continuously. Call these scripts with 2 or 3 arguments:

ntest2 from-host to-host
ntest2 from-host to-host 10M

where the 3rd argument gives the amount of data transferred between
two timing reports (default: 1M).

The output looks like this:

39942K 38.1724 0.9530 1071470 1100278
41013K 39.1454 0.9729 1072855 1077729
42021K 40.0379 0.8925 1074719 1174828

where the numbers reported are:

1) sum of data transferred (in Kbytes, K=1024)
2) total time in seconds
3) time for the last "block" transferred (here 1MB)
4) mean transfer rate for the whole run in bytes/sec ($1 / $2)
5) transfer rate for the last block (blocksize / $3)
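
As a quick sanity check, columns 4 and 5 can be recomputed from the first
three columns. A small sketch ('report.txt' just stands for a saved copy of
the output, and the default block size of 1M = 1048576 bytes is assumed):

awk '{ kb = $1; sub(/K$/, "", kb);
       printf "%.0f %.0f\n", kb*1024/$2, 1048576/$3 }' report.txt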

Using this 'mybuffer' you can do lots of interesting timing and performance
tests with dynamic data, e.g.

mybuffer -S 10M -s 64k < /dev/zero > /dev/null

(and then vary the process load or run programs which trash the L1/L2 caches
and/or main RAM), or test I/O rates or media transfer rates for a whole disk
(because for most modern disks using zone bit recording, the media transfer
rate isn't constant):

mybuffer -S 10M -s 64k < /dev/sda > /dev/null
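
To actually see the zone-bit-recording effect, one can compare the rate at
the start of the disk with the rate further in, e.g. via dd's 'skip'
(the block counts below are made up and have to be adapted to the disk size):

dd if=/dev/sda bs=64k count=1600 | mybuffer -S 10M -s 64k > /dev/null
dd if=/dev/sda bs=64k count=1600 skip=30000 | mybuffer -S 10M -s 64k > /dev/null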


for "good" networks with no problems at all, 'ntest2' seems to give
higher transfer rates while for network problems 'ntest1' usually
showes better rates. I didn't understand yet what's the important difference here.
maybe someone else can explain when playing with it...

fuer "gute" netze ohne probleme zeigt 'ntest2' die besseren werte an,
fuer "schlechte" netze ist oft 'ntest1' besser. darum, habe ich bislang
leider nicht verstanden :-(


Harald
--
All SCSI disks will from now on                     ___       _____
be required to send an email notice                0--,|    /OOOOOOO\
24 hours prior to complete hardware failure!      <_/  /  /OOOOOOOOOOO\
                                                    \  \/OOOOOOOOOOOOOOO\
                                                      \ OOOOOOOOOOOOOOOOO|//
Harald Koenig,                                         \/\/\/\/\/\/\/\/\/
Inst.f.Theoret.Astrophysik                              //  /     \\  \
koenig@tat.physik.uni-tuebingen.de                     ^^^^^        ^^^^^
#!/bin/bash
#
# First attached script: run it on the local machine; it logs into $2
# (to_host), which pulls a /dev/zero stream from $1 (from_host) via rsh
# and pipes it through (my)buffer into /dev/null, printing a rate report
# every $bsize bytes.

if [ -z "$1" -o -z "$2" ] ; then
  echo ""
  echo "usage: $0 from_host to_host [bsize]"
  echo ""
  exit 1
fi

bsize=${3:-1M}

rsh -n $2 "
  rsh -n $1 'dd bs=63k if=/dev/zero' | \
  sh -c ' \
    if [ -x /usr/local/bin/mybuffer ] ; then \
      /usr/local/bin/mybuffer -S $bsize -s 63k ; \
    elif [ -x /usr/local/bin/buffer ] ; then \
      /usr/local/bin/buffer -S $bsize -s 63k ; \
    else \
      buffer -S $bsize -s 63k ; \
    fi > /dev/null ' "
#!/bin/bash
#
# Second attached script: run it on the local machine; it logs into $1
# (from_host), feeds /dev/zero through (my)buffer there and pushes the
# stream via rsh to $2 (to_host), where dd discards it into /dev/null.

if [ -z "$1" -o -z "$2" ] ; then
  echo ""
  echo "usage: $0 from_host to_host [bsize]"
  echo ""
  exit 1
fi

bsize=${3:-1M}

rsh -n $1 "
  sh -c ' \
    if [ -x /usr/local/bin/mybuffer ] ; then \
      /usr/local/bin/mybuffer -S $bsize -s 63k ; \
    elif [ -x /usr/local/bin/buffer ] ; then \
      /usr/local/bin/buffer -S $bsize -s 63k ; \
    else \
      buffer -S $bsize -s 63k ; \
    fi < /dev/zero ' | \
  rsh $2 'dd bs=63k of=/dev/null' "
[attachment: application/x-gunzip, not rendered]