Subject: Re: problems achieving decent throughput with latency.
David S. Miller wrote:
> From: Ben Greear <>
> Date: Mon, 03 Feb 2003 10:03:48 -0800
>
> Also, if it's as simple as allocating a few more buffers for tcp, maybe we
> should consider defaulting to higher values in the normal kernel? (I'm not
> suggesting **my** numbers...)
>
> The current values are the only "safe" defaults. Here "safe" means
> that if you have thousands of web connections, clients cannot force
> the server to queue large amounts of traffic per socket.
>
> The attack goes something like: open N thousand connections to the
> server, ask for a large static object, and do not ACK any of the data
> packets. The server must thus hold onto N thousand * maximum socket
> write buffer bytes of memory.
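
(For reference, the "maximum socket write buffer" above is the per-socket
send buffer that userspace sees as SO_SNDBUF. A minimal sketch, assuming a
Linux host, of how a program can query it and request more; the kernel
roughly doubles the requested value for bookkeeping and clamps it to
net.core.wmem_max:)

    /* Query the default send buffer on a new TCP socket, then request
     * a larger one and print what the kernel actually grants. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            int sndbuf = 0;
            socklen_t len = sizeof(sndbuf);

            if (fd < 0) {
                    perror("socket");
                    return 1;
            }

            /* Default per-socket write buffer handed out by the kernel. */
            getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
            printf("default SO_SNDBUF: %d bytes\n", sndbuf);

            /* Ask for 256KB; the result is capped by net.core.wmem_max. */
            sndbuf = 256 * 1024;
            setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
            len = sizeof(sndbuf);
            getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
            printf("granted SO_SNDBUF: %d bytes\n", sndbuf);

            close(fd);
            return 0;
    }

(N thousand stalled sockets, each pinned at that granted size, is the memory
bound being described.)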

Why would it use the maximum socket write buffer for a connection seeing few
or no ACKs, i.e. little to no throughput? It seems the connection would have
to scale up to full speed (a fully opened sliding window), which would
require the attacker to have large receive bandwidth, and also enough
precision to stop ACKing just as the window gets big (but before the object
download completes). This does not seem like a great DoS to me.

On my system, the default write buffer seems to be about 80k (the docs say it
is scaled by how much memory I have: 128MB). How big can N get? If N is 10k
connections, that is 10,000 * 80k: roughly 800MB of buffers an attacker could
force me to hold?
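
(Again just an illustration: the defaults mentioned here are the three values,
min/default/max, in /proc/sys/net/ipv4/tcp_wmem. A rough sketch that reads
them and does the same back-of-the-envelope arithmetic, with N = 10000 picked
arbitrarily:)

    /* Read the min/default/max TCP write-buffer sizes from
     * /proc/sys/net/ipv4/tcp_wmem and compute the worst case for
     * N stalled (never-ACKing) connections. */
    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/proc/sys/net/ipv4/tcp_wmem", "r");
            long wmin, wdef, wmax;
            long n = 10000;   /* hypothetical connection count */

            if (!f || fscanf(f, "%ld %ld %ld", &wmin, &wdef, &wmax) != 3) {
                    perror("tcp_wmem");
                    return 1;
            }
            fclose(f);

            printf("tcp_wmem: min=%ld default=%ld max=%ld bytes\n",
                   wmin, wdef, wmax);
            printf("%ld stalled sockets could pin up to ~%lld MB\n",
                   n, (long long)n * wmax / (1024 * 1024));
            return 0;
    }

(With the ~80k maximum quoted above, 10k sockets works out to roughly 800MB.)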

Ben Greear <>
President of Candela Technologies Inc

