    Subject: Re: [Patch 0/5] I/O statistics through request queues
    Jens Axboe wrote:
    > On Tue, Oct 24 2006, Martin Peschke wrote:
    >> Jens Axboe wrote:
    >>>> Our tests indicate that the blktrace approach is fine for performance
    >>>> analysis as long as the system to be analysed isn't too busy.
    >>>> But once the system faces a considerable amount of non-sequential I/O
    >>>> workload, the sheer volume of blktrace-generated data starts to get painful.
    >>> Why haven't you done an analysis and posted it here? I surely cannot fix
    >>> what nobody tells me is broken or suboptimal.
    >> Fair enough. We have tried out the silly way of blktrace-ing, storing
    >> data locally. So, it's probably not worthwhile discussing that.
    > You'd probably never want to do local traces for performance analysis.
    > It may be handy for other purposes, though.

    "...probably not worthwhile discussing that" in the context of performance

    >>> I have to say it's news to
    >>> me that it's performance intensive, tests I did with Alan Brunelle a
    >>> year or so ago showed it to be quite low impact.
    >> I found some discussions on linux-btrace (February 2006).
    >> There is little information on how the alleged 2 percent impact has
    >> been determined. Test cases seem to comprise formatting disks ...hmm.
    > It may sound strange, but formatting a large drive generates a huge
    > flood of block layer events from lots of io queued and merged. So it's
    > not a bad benchmark for this type of thing. And it's easy to test :-)

    Just wondering to what degree this might resemble I/O workloads run
    by customers in their data centers.

    >>> You'd be silly to locally store traces, send them out over the network.
    >> Will try this next and post complaints, if any, along with numbers.
    > Thanks! Also note that you do not need to log every event, just register
    > a mask of interesting ones to decrease the output logging rate. We could
    > do with some better setup for that though, but at least you should be
    > able to filter out some unwanted events.

    ...and consequently try to scale down relay buffers, reducing the risk of
    memory constraints caused by blktrace activation.
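
    For illustration, here is a minimal sketch of how a reduced event mask
    and smaller relay buffers could be requested through the blktrace ioctl
    interface of that era; the device path, mask choice and buffer sizes
    below are made-up example values, not recommendations:

        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/fs.h>           /* BLKTRACE* ioctls */
        #include <linux/blktrace_api.h> /* blk_user_trace_setup, BLK_TC_* */

        int main(void)
        {
                struct blk_user_trace_setup buts = {
                        /* trace only issue and completion events,
                           everything else is filtered in the kernel */
                        .act_mask = BLK_TC_ISSUE | BLK_TC_COMPLETE,
                        /* relay buffers smaller than blktrace's defaults */
                        .buf_size = 64 * 1024,
                        .buf_nr   = 2,
                };
                int fd = open("/dev/sda", O_RDONLY); /* example device */

                if (fd < 0)
                        return 1;
                if (ioctl(fd, BLKTRACESETUP, &buts) < 0 ||
                    ioctl(fd, BLKTRACESTART) < 0)
                        return 1;
                /* per-cpu trace data now arrives in relay files under
                   debugfs, e.g. /sys/kernel/debug/block/sda/trace0 */
                printf("tracing as %s\n", buts.name);

                /* ... consume the trace data, then shut down: */
                ioctl(fd, BLKTRACESTOP);
                ioctl(fd, BLKTRACETEARDOWN);
                close(fd);
                return 0;
        }

    The blktrace userspace tool exposes the same knobs through its -a
    (action mask), -b (buffer size) and -n (number of buffers) options.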

    >> However, a fast network connection plus a second system for blktrace
    >> data processing are serious requirements. Think of servers secured
    >> by firewalls. Reading some counters in debugfs, sysfs or whatever
    might be more appropriate for someone who has noticed an unexpected
    >> I/O slowdown and needs directions for further investigation.
    > It's hard to make something that will suit everybody. Maintaining some
    > counters in sysfs is of course less expensive when your POV is cpu
    > cycles.

    Counters are also cheaper with regard to memory consumption. Counters
    probably cause fewer side effects, but are less flexible than
    full-blown traces.
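
    To make the counter-based alternative concrete: the block layer already
    exports a handful of per-disk counters through sysfs that can be sampled
    with a plain read, no relay channels or trace consumer required. A rough
    sketch (device name is an example; field layout as documented in
    Documentation/block/stat.txt):

        #include <stdio.h>

        /* /sys/block/<dev>/stat holds 11 whitespace-separated counters:
           read/write I/Os completed and merged, sectors transferred,
           milliseconds spent on reads/writes, requests in flight, and
           queue time accounting */
        int main(void)
        {
                unsigned long long rd_ios, rd_merges, rd_sec, rd_ticks;
                unsigned long long wr_ios, wr_merges, wr_sec, wr_ticks;
                unsigned long long in_flight, io_ticks, time_in_queue;
                FILE *f = fopen("/sys/block/sda/stat", "r");

                if (!f)
                        return 1;
                if (fscanf(f, "%llu %llu %llu %llu %llu %llu %llu %llu "
                              "%llu %llu %llu",
                           &rd_ios, &rd_merges, &rd_sec, &rd_ticks,
                           &wr_ios, &wr_merges, &wr_sec, &wr_ticks,
                           &in_flight, &io_ticks, &time_in_queue) == 11)
                        printf("reads: %llu (%llu ms), "
                               "writes: %llu (%llu ms)\n",
                               rd_ios, rd_ticks, wr_ios, wr_ticks);
                fclose(f);
                return 0;
        }

    Sampling these counters twice and taking the difference is often enough
    to confirm or rule out an unexpected I/O slowdown before reaching for
    full-blown tracing.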

