    Subject: Re: [PATCH 2/5] mm: remove unlikely NULL from kfree

    On Wed, 25 Mar 2009, Matt Mackall wrote:
    > >
    > > I think the theory is that gcc and the CPU can handle normal branch
    > > predictions well. But if you use one of the prediction macros, gcc
    > > (and some archs) behaves differently, such that taking the wrong branch
    > > can cost more than the time saved with all the other correct hits.
    > >
    > > Again, I'm not sure. I haven't done the benchmarks. Perhaps someone else
    > > is more apt at knowing the details here.
    > From first principles, we can make a reasonable model of branch
    > prediction success with a branch cache:
    >
    >              hot cache    cold cache   cold cache   cold cache
    >              w|w/o hint                good hint    bad hint
    > p near 0          +            +            +            -
    > p near .5         0            0            0            0
    > p near 1          +            -            +            -
    > (this assumes the CPU is biased against branching in the cold cache
    > case)
    > Branch prediction miss has a penalty measured in some smallish number of
    > cycles. So the impact in cycles/sec[1] is (p(miss) * penalty) * (calls /
    > sec). Because the branch cache kicks in and hides our unlikely hint with
    > a hot cache, we can't get a high calls/sec, so to have much impact,
    > we've got to have a very high probability of a missed branch (p near 1)
    > _and_ cold cache.
    > So for CPUs with a branch cache, unlikely hints only make sense in
    > fairly extreme cases. And I think that includes most CPU families these
    > days as it's pretty much required to get much advantage from making the
    > CPU clock faster than the memory bus.
    > We'd have a lot of trouble benchmarking this meaningfully as hot caches
    > kill the effect. And it would of course depend directly on a given CPU's
    > branch cache size and branch miss penalty, numbers that vary from model
    > to model. So I think unless we can confidently state that a branch is
    > rarely taken, there's very little upside to adding unlikely.
    > On the other hand, there's also very little downside until our hint is
    > grossly inaccurate. So there's a huge hysteresis here: if p is < .99,
    > not much point adding unlikely. If p is > .1, not much point removing
    > it.
    > [1] Note that cycles/sec is dimensionless as cycles and seconds are both
    > measures of time. So impact here is in units of very small fractions of
    > a percent.

    Hi Matt,

    Thanks for this info!
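
    To get a feel for the scale of the impact formula above, here is a quick
    back-of-the-envelope calculation (every number in it is invented purely
    for illustration, nothing here was measured):

        #include <stdio.h>

        int main(void)
        {
                double p_miss    = 0.9;     /* mispredict probability (p near 1, cold cache) */
                double penalty   = 20.0;    /* mispredict penalty in cycles, model dependent */
                double calls_sec = 10000.0; /* cold-cache calls per second */
                double cpu_hz    = 2e9;     /* 2 GHz clock */

                /* impact = (p(miss) * penalty) * (calls/sec), expressed as a
                 * fraction of the cycles available per second */
                double impact = p_miss * penalty * calls_sec / cpu_hz;

                printf("impact: %f%% of cycles\n", impact * 100.0);
                return 0;
        }

    With those made-up numbers the hint buys (or costs) about 0.009% of the
    CPU, i.e. the "very small fractions of a percent" range noted in the
    footnote.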

    Although gcc plays a role too. That is, if we have

    if (x)
            do something small;

    do something large;

    this can be broken into:

            cmp x
            beq 1f
            do something small
    1:
            do something large

    Which plays nice with the cache. But, by adding an unlikely(x), gcc will
    probably choose to do:

            cmp x
            bne 2f
    1:
            do something large


    2:
            do something small
            b 1b

    which hurts in a number of ways.
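
    For reference, a minimal standalone sketch of the construct being
    discussed (the function and the NULL check here are illustrative only,
    not the actual kfree() code):

        #include <stddef.h>

        /* The kernel's unlikely() boils down to gcc's __builtin_expect(),
         * telling the compiler the condition is expected to be false. */
        #define unlikely(x)  __builtin_expect(!!(x), 0)

        void my_free(const void *ptr)       /* illustrative stand-in for kfree() */
        {
                if (unlikely(ptr == NULL))  /* the hint under discussion */
                        return;             /* "do something small" */

                /* "do something large": the real freeing work would go here;
                 * with the hint, gcc tends to lay out the early-return path
                 * out of line, as in the second listing above. */
        }

    Comparing gcc's assembly output for this with and without the hint
    should show the block reordering described above.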

    -- Steve
