    Subject: Re: Context switch times

    In article <>,
    Benjamin LaHaise <> wrote:
    >On Thu, Oct 04, 2001 at 02:52:39PM -0700, David S. Miller wrote:
    >> So the FPU hit is only before/after the runs, not during each and
    >> every iteration.
    >Right. Plus, the original mail mentioned that it was hitting all 8
    >CPUs, which is a pretty good example of braindead scheduler behaviour.
    That's not actually true (the braindead part, that is).

    We went through this with Ingo and Larry McVoy, and the sad fact is that
    to get the best numbers for lmbench, you simply have to do the wrong
    thing.

    Could we try to hit just two? Probably, but it doesn't really matter:
    to make the lmbench scheduler benchmark go at full speed, you want to
    limit it to _one_ CPU, which is not sensible in real-life situations.
    The amount of concurrency in the context switching benchmark is pretty
    small, and does not make up for bouncing the locks etc. between CPUs.
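
    Purely as an illustration of what "limit it to one CPU" would mean in
    practice (this is not from the original mail, and sched_setaffinity()
    only showed up in later kernels): a minimal sketch that pins itself to
    a single CPU and then execs lmbench's lat_ctx, so the ping-pong never
    bounces between processors.

        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(void)
        {
            cpu_set_t mask;

            CPU_ZERO(&mask);
            CPU_SET(0, &mask);      /* which CPU doesn't matter; "one" does */

            /* pid 0 == this process; children inherit the mask across exec */
            if (sched_setaffinity(0, sizeof(mask), &mask) < 0) {
                perror("sched_setaffinity");
                exit(1);
            }

            /* assumes lat_ctx is in $PATH; "2" = two ping-pong processes */
            execlp("lat_ctx", "lat_ctx", "2", (char *) NULL);
            perror("execlp");
            return 1;
        }

    The point of the sketch is only that confining both processes to one
    CPU removes the cache and lock bouncing, which flatters the benchmark
    number without being a sane scheduling policy for real loads.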

    However, that lack of concurrency in lmbench is totally due to the
    artificial nature of the benchmark, and the bigger-footprint scheduling
    tests (which aren't reported very much in the summary) are more realistic.
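
    Again purely as a hedged sketch, not lmbench's actual lat_ctx source:
    a pipe-based ping-pong in which each process also touches a private
    working set between switches, which is roughly the kind of
    bigger-footprint test meant above.  ITERS and FOOTPRINT are arbitrary
    values picked for illustration.

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/wait.h>
        #include <sys/time.h>

        #define ITERS     100000
        #define FOOTPRINT (64 * 1024)   /* per-process working set, bytes */

        /* Touch the buffer so every switch also drags a cache footprint. */
        static void touch(char *buf)
        {
            int i;
            for (i = 0; i < FOOTPRINT; i += 64)
                buf[i]++;
        }

        int main(void)
        {
            int p1[2], p2[2], i;
            char token = 'x';
            char *buf = calloc(1, FOOTPRINT);
            struct timeval start, end;

            if (pipe(p1) < 0 || pipe(p2) < 0 || !buf) {
                perror("setup");
                exit(1);
            }

            if (fork() == 0) {          /* child: read p1, touch, write p2 */
                char *cbuf = calloc(1, FOOTPRINT);
                if (!cbuf)
                    _exit(1);
                for (i = 0; i < ITERS; i++) {
                    read(p1[0], &token, 1);
                    touch(cbuf);
                    write(p2[1], &token, 1);
                }
                _exit(0);
            }

            gettimeofday(&start, NULL);
            for (i = 0; i < ITERS; i++) {   /* parent: write p1, touch, read p2 */
                write(p1[1], &token, 1);
                touch(buf);
                read(p2[0], &token, 1);
            }
            gettimeofday(&end, NULL);
            wait(NULL);

            printf("%.2f us per round trip (two switches plus footprint)\n",
                   ((end.tv_sec - start.tv_sec) * 1e6 +
                    (end.tv_usec - start.tv_usec)) / ITERS);
            return 0;
        }

    Run it pinned to one CPU and then unpinned on an SMP box, and the
    difference is the bouncing cost the paragraph above is about.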

    So 2.4.x took the (painful) road of saying that we care less about that
    particular benchmark than about some other more realistic loads.
