    Date: Fri, 24 Apr 2015
    From: Greg Kroah-Hartman
    Subject: Re: [GIT PULL] kdbus for 4.1-rc1
    On Fri, Apr 24, 2015 at 08:45:15AM +0200, Greg Kroah-Hartman wrote:
    > On Fri, Apr 24, 2015 at 08:36:03AM +0200, Greg Kroah-Hartman wrote:
    > > On Thu, Apr 23, 2015 at 10:56:40PM +0200, Borislav Petkov wrote:
    > > > > Hm, this seems to me to be O(1), pretty constant, we do the same
    > > > > amount of work all the time.
    > > >
    > > > The same *pile* of unnecessary and needless work. You go and collect
    > > > *all* that data on *every* packet send?!
    > >
    > > No, not at all. The metadata is cached: we only collect it for the
    > > first message sent, if we didn't already know it, or we collect it
    > > at the "open" of the connection, depending on what metadata we are
    > > gathering.
    > >
    > > The mc->collected test right before collecting each specific piece
    > > of metadata is that "cached or not" check.
    >
    > Oh wait, no, there is some send-time metadata that is collected for
    > every message, see Linus's email for more details about that. Maybe
    > this can be changed to cache even more than we currently do.
    >
    > it's early, shouldn't write emails before coffee...
    >
    > David had some flamegraphs floating around that showed where all the
    > time on transmit / receive was being spent, and I don't think the
    > metadata code showed up as all that significant, but I can't find
    > them anymore to say for sure. There are other areas on the send path
    > that can be sped up, but perf data is the best way to verify this.
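
    To make the caching scheme above a bit more concrete, here is a rough
    userspace sketch of the pattern: cacheable items are guarded by a
    "collected" bitmask and gathered only once, while send-time items are
    refreshed on every message. All names here (meta_ctx, META_ATTACH_*,
    and so on) are made up for illustration, this is not the actual kdbus
    code:

        /* Rough userspace mock-up of the caching pattern; all names
         * are illustrative, not the real kdbus identifiers. */
        #include <stdint.h>
        #include <stdio.h>
        #include <time.h>

        /* Which metadata items a message wants attached. */
        #define META_ATTACH_CREDS     (1ULL << 0) /* cacheable            */
        #define META_ATTACH_COMM      (1ULL << 1) /* cacheable            */
        #define META_ATTACH_TIMESTAMP (1ULL << 2) /* per-send, not cached */

        struct meta_ctx {
                uint64_t collected; /* bitmask of items already gathered */
                uint32_t uid;       /* cached credential                 */
                char comm[16];      /* cached process name               */
                uint64_t ts_ns;     /* send-time item, always refreshed  */
        };

        static uint64_t now_ns(void)
        {
                struct timespec ts;
                clock_gettime(CLOCK_MONOTONIC, &ts);
                return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
        }

        /* Called on every send; the expensive items only run once. */
        static void meta_collect(struct meta_ctx *mc, uint64_t what)
        {
                /* The "cached or not" test: skip what we already have. */
                if ((what & META_ATTACH_CREDS) &&
                    !(mc->collected & META_ATTACH_CREDS)) {
                        mc->uid = 1000; /* stand-in for a real lookup */
                        mc->collected |= META_ATTACH_CREDS;
                }
                if ((what & META_ATTACH_COMM) &&
                    !(mc->collected & META_ATTACH_COMM)) {
                        snprintf(mc->comm, sizeof(mc->comm), "sender");
                        mc->collected |= META_ATTACH_COMM;
                }
                /* Send-time metadata is gathered on every message. */
                if (what & META_ATTACH_TIMESTAMP)
                        mc->ts_ns = now_ns();
        }

        int main(void)
        {
                struct meta_ctx mc = { 0 };
                uint64_t what = META_ATTACH_CREDS | META_ATTACH_COMM |
                                META_ATTACH_TIMESTAMP;

                for (int i = 0; i < 3; i++) {
                        /* creds/comm hit the cache after the first pass */
                        meta_collect(&mc, what);
                        printf("msg %d: uid=%u comm=%s ts=%llu\n", i,
                               mc.uid, mc.comm,
                               (unsigned long long)mc.ts_ns);
                }
                return 0;
        }

    With something like that, the per-send cost is reduced to only the
    unconditional items (the timestamp here), which is the direction the
    "cache even more" idea above would push things.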

    Here are the graphs he posted during the last code review cycle that
    are relevant here:
    http://lkml.iu.edu/hypermail/linux/kernel/1503.2/02624.html
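
    (If anyone wants to regenerate that kind of graph themselves, perf
    plus Brendan Gregg's FlameGraph scripts will do it; roughly, and
    assuming the FlameGraph repo's scripts are on your PATH:

        perf record -g -a -- sleep 30     # sample all CPUs for 30s
        perf script | stackcollapse-perf.pl | flamegraph.pl > out.svg

    then open out.svg in a browser and look for wide frames in the kdbus
    send path.)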

    greg k-h

