    Subject: [PATCH v2 0/7] per-cgroup tcp buffer pressure settings
    From: Glauber Costa
    Date: 14 Sep 2011
    This patch series introduces per-cgroup tcp buffer limits. It allows
    sysadmins to specify a maximum amount of kernel memory that
    tcp connections can use at any point in time. TCP is the main interest
    of this work, but extending it to other protocols would be easy.

    For this to work, I am hooking it into memcg, after the introduction of
    an extension for tracking and controlling objects in kernel memory.
    Since such objects are usually not allocated at page granularity, and are
    fundamentally different from userspace memory (not swappable, cannot be
    overcommitted), they need their own special place inside the Memory
    Controller.

    Right now, the kmem extension is quite basic: it just lays down the
    infrastructure for the ongoing work.

    It does not yet account kernel memory allocations themselves - I
    preferred to keep this series simple and leave that accounting to the
    slab allocation patches when they arrive.

    What it does is piggyback on the memory control mechanism already present
    in /proc/sys/net/ipv4/tcp_mem. There is a soft limit and a hard limit,
    and allocations are suppressed when they are reached. For each cgroup,
    however, the file kmem.tcp_maxmem will be used to cap those values.
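
    As a rough sketch of the intended usage (the memcg mount point, the
    cgroup name and the unit of kmem.tcp_maxmem below are my assumptions
    for illustration, not something this cover letter pins down):

    # global knobs keep working as today (min, pressure, max, in pages);
    # the values shown are illustrative only
    cat /proc/sys/net/ipv4/tcp_mem
    196608  262144  393216

    # per-cgroup cap on top of that, assuming memcg is mounted at
    # /cgroups/memory and the value is given in bytes
    mkdir /cgroups/memory/container0
    echo 67108864 > /cgroups/memory/container0/kmem.tcp_maxmem  # e.g. 64M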

    The usage I have in mind here is containers. Each container will
    define its own values for soft and hard limits, but none of them can
    be bigger than the value the box's sysadmin specified from the
    outside.
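
    In container terms the split would look roughly like this (again a
    sketch with assumed paths and values; the per-container tcp_mem write
    relies on the per-netns sysctl introduced later in the series):

    # host sysadmin, from the outside: hard cap for everything in container0
    echo 67108864 > /cgroups/memory/container0/kmem.tcp_maxmem

    # inside container0 (its own network namespace): the container tunes
    # its own soft/hard values, but they stay capped by the value above
    echo "65536 98304 131072" > /proc/sys/net/ipv4/tcp_mem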

    To test for any performance impact of this patchset, I used netperf's
    TCP_RR benchmark on localhost, so that both the recv and snd paths are
    exercised.

    The command line used was ./src/netperf -t TCP_RR -H localhost, and
    the results were:

    Without the patch
    =================

    Local /Remote
    Socket Size   Request  Resp.   Elapsed  Trans.
    Send   Recv   Size     Size    Time     Rate
    bytes  Bytes  bytes    bytes   secs.    per sec

    16384  87380  1        1       10.00    26996.35
    16384  87380

    With the patch
    ===============

    Local /Remote
    Socket Size   Request  Resp.   Elapsed  Trans.
    Send   Recv   Size     Size    Time     Rate
    bytes  Bytes  bytes    bytes   secs.    per sec

    16384  87380  1        1       10.00    27291.86
    16384  87380

    The difference is around one percent, with the patched run being the
    slightly faster one here.

    Nesting cgroups does not seem to be a dominating factor either:
    nestings up to 10 levels deep showed no significant performance
    difference.
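
    For reference, that kind of nesting can be reproduced with something
    like the following (a sketch only, assuming the memory controller is
    mounted at /cgroups/memory; the exact test setup is not spelled out
    here):

    # create 10 nested memory cgroups and run the benchmark from the
    # deepest one (cgroup v1 "tasks" interface)
    d=/cgroups/memory
    for i in $(seq 1 10); do d=$d/level$i; mkdir $d; done
    echo $$ > $d/tasks
    ./src/netperf -t TCP_RR -H localhost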


    Glauber Costa (7):
    Basic kernel memory functionality for the Memory Controller
    socket: initial cgroup code.
    foundations of per-cgroup memory pressure controlling.
    per-cgroup tcp buffers control
    per-netns ipv4 sysctl_tcp_mem
    tcp buffer limitation: per-cgroup limit
    Display current tcp memory allocation in kmem cgroup

    Documentation/cgroups/memory.txt |  31 +++-
    crypto/af_alg.c                  |   7 +-
    include/linux/memcontrol.h       |  84 +++++++++
    include/net/netns/ipv4.h         |   1 +
    include/net/sock.h               | 126 +++++++++++++-
    include/net/tcp.h                |  14 +-
    include/net/udp.h                |   3 +-
    include/trace/events/sock.h      |  10 +-
    init/Kconfig                     |  11 ++
    mm/memcontrol.c                  | 354 +++++++++++++++++++++++++++++++++++++-
    net/core/sock.c                  |  93 +++++++---
    net/decnet/af_decnet.c           |  21 ++-
    net/ipv4/proc.c                  |   7 +-
    net/ipv4/sysctl_net_ipv4.c       |  71 +++++++-
    net/ipv4/tcp.c                   |  58 ++++---
    net/ipv4/tcp_input.c             |  12 +-
    net/ipv4/tcp_ipv4.c              |  18 ++-
    net/ipv4/tcp_output.c            |   2 +-
    net/ipv4/tcp_timer.c             |   2 +-
    net/ipv4/udp.c                   |  20 ++-
    net/ipv6/tcp_ipv6.c              |  16 +-
    net/ipv6/udp.c                   |   4 +-
    net/sctp/socket.c                |  35 +++-
    23 files changed, 876 insertions(+), 124 deletions(-)

    --
    1.7.6


