 
Subject: Re: [Patch][RFC] Disabling per-tgid stats on task exit in taskstats
    Andrew Morton wrote:

    >On Thu, 29 Jun 2006 09:44:08 -0700
    >Paul Jackson <pj@sgi.com> wrote:
    >
    >>>You're probably correct on that model. However, it all depends on the actual
    >>>workload. Are people who actually have large-CPU (>256) systems actually
    >>>running fork()-heavy things like webservers on them, or are they running things
    >>>like database servers and computations, which tend to have persistent
    >>>processes?
    >>>
    >>It may well be mostly as you say - the large-CPU systems not running
    >>the fork() heavy jobs.
    >>
    >>Sooner or later, someone will want to run a fork()-heavy job on a
    >>large-CPU system. On a 1024 CPU system, it would apparently take
    >>just 14 exits/sec/CPU to hit this bottleneck, if Jay's number of
    >>14000 applied.
    >>
    >>Chris Sturdivant's reply is reasonable -- we'll hit it sooner or later,
    >>and deal with it then.
    >>
    >
    >I agree, and I'm viewing this as blocking the taskstats merge, because if
    >this _is_ a problem then it's a big one: fixing it will be intrusive, and
    >might well involve userspace-visible changes.
    >
    First off, just a reminder that this is inherently a netlink flow-control
    issue...which was being exacerbated earlier by taskstats' decision to send
    per-tgid data (no longer the case).

    But I'd like to know what our target is here. How many messages per second
    do we want to be able to send and receive without risking any loss of data?
    Netlink will lose messages at a high enough rate, so the design point needs
    to be known, at least roughly.

    For statistics-type usage of genetlink/netlink, I would have thought that
    userspace, provided it is reliably informed about the loss of data through
    ENOBUFS, could take measures to simply account for the missing data and
    carry on?
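    To make that concrete, here is a minimal userspace sketch (an illustration
    only, not part of any patch; it assumes an already-bound netlink socket fd)
    that treats ENOBUFS as "some messages were dropped", counts the gap and
    keeps reading:

	#include <errno.h>
	#include <stdio.h>
	#include <sys/socket.h>
	#include <linux/netlink.h>

	static void listen_loop(int sock)
	{
		char buf[8192];
		unsigned long overruns = 0;

		for (;;) {
			ssize_t len = recv(sock, buf, sizeof(buf), 0);

			if (len < 0) {
				if (errno == ENOBUFS) {
					/* kernel overran our rcvbuf: some
					 * messages were lost; account for
					 * the gap and carry on */
					overruns++;
					continue;
				}
				if (errno == EINTR)
					continue;
				perror("recv");
				return;
			}
			/* walk the struct nlmsghdr chain in buf here ... */
			printf("%zd bytes, %lu overruns so far\n",
			       len, overruns);
		}
	}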

    >The only ways I can see of fixing the problem generally are to either
    >
    >a) throw more CPU(s) at stats collection: allow userspace to register for
    > "stats generated by CPU N", then run a stats collection daemon on each
    > CPU or
    >
    >b) make the kernel recognise when it's getting overloaded and switch to
    > some degraded mode where it stops trying to send all the data to
    > userspace - just send a summary, or a "we goofed" message or something.
    >
    One of the unused features of genetlink that's meant for high-volume data
    output from the kernel is the "dump" callback of a genetlink connection.
    Essentially, kernel space keeps getting supplied sk_buffs to fill, which
    the netlink layer then delivers to user space (over time, I guess?).
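    For reference, a rough sketch of what wiring up such a dump callback looks
    like. The command number, record count and fill helper below are made up,
    but the dumpit signature and the convention of returning skb->len to be
    called again (0 when the dump is done) are genetlink's:

	#include <net/genetlink.h>

	#define MY_CMD_DUMP	1	/* made-up command number */
	#define NUM_ITEMS	1024	/* made-up record count */

	/* hypothetical helper: nla_put()s record i, returns < 0 when
	 * there is no room left in the skb */
	static int fill_one_record(struct sk_buff *skb, long i);

	static int my_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
	{
		long i;

		/* cb->args[] persists across calls; use it as a cursor */
		for (i = cb->args[0]; i < NUM_ITEMS; i++)
			if (fill_one_record(skb, i) < 0)
				break;	/* skb full; resume here next time */

		cb->args[0] = i;

		/* non-zero: netlink calls us again with a fresh skb */
		return (i == NUM_ITEMS) ? 0 : skb->len;
	}

	static struct genl_ops my_dump_op = {
		.cmd	= MY_CMD_DUMP,
		.dumpit	= my_dumpit,
	};

    Registration of the op against a genl_family is left out; the point is
    just that the dump advances as the reader drains it, rather than the
    sender pushing everything at once.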

    But whatever we do, there's going to be some limit, so it's useful to
    decide what the design point should be.

    Adding Jamal for his thoughts on netlink's flow control in the context
    of genetlink.

    --Shailabh
