    Subject: Re: Lockless page cache test results
    On Thu, Apr 27 2006, Nick Piggin wrote:
    > Jens Axboe wrote:
    > >Hi,
    > >
    > >Running a splice benchmark on a 4-way IPF box, I decided to give the
    > >lockless page cache patches from Nick a spin. I've attached the results
    > >as a png; it pretty much speaks for itself.
    > >
    > >The test in question splices a 1GiB file to a pipe and then splices that
    > >to some output. Normally that output would be something interesting; in
    > >this case it's simply /dev/null. So it tests the input side of things
    > >only, which is what I wanted to do here. To get adequate runtime, the
    > >operation is repeated a number of times (120 in this example). The
    > >benchmark does that number of loops with 1, 2, 3, and 4 clients each
    > >pinned to a private CPU. The pinning is mainly done for more stable
    > >results.
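
    A minimal sketch of the splice loop described above, assuming splice(2) is
    available (the file name, chunk size, and error handling here are
    simplified placeholders); the per-client CPU pinning would be done
    separately, e.g. with sched_setaffinity():

    /* Sketch: splice a file into a pipe and drain the pipe to /dev/null,
     * repeated 'loops' times. */
    #define _GNU_SOURCE
    #include <sys/types.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
            const char *file = argc > 1 ? argv[1] : "testfile"; /* placeholder */
            int loops = 120, pfd[2];
            int in = open(file, O_RDONLY);
            int out = open("/dev/null", O_WRONLY);

            if (in < 0 || out < 0 || pipe(pfd)) {
                    perror("setup");
                    return 1;
            }

            for (int i = 0; i < loops; i++) {
                    loff_t off = 0;
                    ssize_t n;

                    /* file -> pipe, then pipe -> /dev/null, until EOF */
                    while ((n = splice(in, &off, pfd[1], NULL, 64 * 1024,
                                       SPLICE_F_MOVE)) > 0) {
                            while (n > 0) {
                                    ssize_t m = splice(pfd[0], NULL, out, NULL,
                                                       n, SPLICE_F_MOVE);
                                    if (m <= 0) {
                                            perror("splice");
                                            return 1;
                                    }
                                    n -= m;
                            }
                    }
            }
            return 0;
    }
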
    > Thanks Jens!
    > It's interesting that single-threaded performance is down a little. Is
    > this significant? In some other results you showed me with 3 splices
    > each running on their own file (ie. no tree_lock contention), lockless
    > looked slightly faster on the same machine.

    I can't say for sure, as I haven't done enough of these runs to know for
    a fact whether it's just a little fluctuation or actually statistically
    significant. The tests are quick to run, so I'll do a series of
    single-threaded runs tomorrow to tell you.

    > It could well be that the speculative get_page operation is naturally
    > a bit slower on Itanium CPUs -- there is a different mix of barriers,
    > reads, writes, etc. If only someone gave me an IPF system... ;)
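
    For context, the read side of such a speculative lookup roughly follows
    the pattern sketched below. This is a simplified, hypothetical rendition
    of the general idea (the function name and details differ from the
    actual patches): take the page reference with atomic_inc_not_zero()
    instead of under tree_lock, then re-check the lookup, which is where the
    extra barriers and atomics come in.

    /* Simplified sketch of a speculative pagecache lookup (not the real
     * patch): find the page without taking mapping->tree_lock, grab a
     * reference only if the refcount is still non-zero, then re-check
     * that the slot still points at the same page.  Assumes an RCU-safe
     * radix tree, which is what the lockless pagecache provides. */
    #include <linux/mm.h>
    #include <linux/pagemap.h>
    #include <linux/radix-tree.h>
    #include <linux/rcupdate.h>

    static struct page *find_get_page_speculative(struct address_space *mapping,
                                                  unsigned long index)
    {
            struct page *page;

            rcu_read_lock();
    repeat:
            page = radix_tree_lookup(&mapping->page_tree, index);
            if (page) {
                    /* Fails if the page is already on its way to being freed. */
                    if (!atomic_inc_not_zero(&page->_count))
                            goto repeat;
                    /*
                     * The page may have been freed and reused for a different
                     * offset between the lookup and the refcount bump; if so,
                     * drop the reference and retry.
                     */
                    if (page != radix_tree_lookup(&mapping->page_tree, index)) {
                            put_page(page);
                            goto repeat;
                    }
            }
            rcu_read_unlock();
            return page;
    }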

    I'll gladly trade the heat and noise generation of that beast with you.

    I can do the same numbers on a 2-way em64t for comparison; that should
    get us a little better coverage.

    > As you said, it would be nice to see how this goes when the other end
    > is 4 gigabit pipes or so... And then things like specweb and file
    > serving workloads.

    Yes, for now I just consider the /dev/null splicing an extremely fast
    and extremely lightweight interconnect :-)

    Jens Axboe
