 
    From:    Kan Liang
    Subject: RE: [PATCH V2 0/5] event synthesization multithreading for perf record
    Date:    2017-10-20
    > 
    > * kan.liang@intel.com <kan.liang@intel.com> wrote:
    >
    > > From: Kan Liang <Kan.liang@intel.com>
    > >
    > > Event synthesization multithreading was introduced in the "perf top
    > > optimization" series: https://lkml.org/lkml/2017/9/29/269
    > > However, it was not enabled for perf record, because the processing
    > > function process_synthesized_event was not multithreading friendly.
    > >
    > > The patch series temporarily stores the processing results in
    > > per-thread files, which allows the synthesization to run in parallel.
    > > The files are then dumped one by one into perf.data at the end of
    > > event synthesization.
    > >
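    (A minimal sketch of the per-thread temp-file pattern described above,
    assuming plain pthread workers; the names here -- synth_worker,
    NR_WORKERS, the /tmp paths -- are hypothetical, and this is not the
    actual perf code, just an illustration of writing private files in
    parallel and serializing them into perf.data afterwards.)

    #include <pthread.h>
    #include <stdio.h>

    #define NR_WORKERS 4

    struct worker {
            pthread_t tid;
            int idx;
            char path[64];
    };

    static void *synth_worker(void *arg)
    {
            struct worker *w = arg;
            FILE *f;

            /* Each worker writes to its own temp file, so no lock is needed. */
            snprintf(w->path, sizeof(w->path), "/tmp/synth-%d.tmp", w->idx);
            f = fopen(w->path, "wb");
            if (!f)
                    return NULL;

            /* ... synthesize this worker's share of events into 'f' ... */

            fclose(f);
            return NULL;
    }

    int main(void)
    {
            struct worker workers[NR_WORKERS];
            char buf[4096];
            FILE *out, *in;
            size_t n;
            int i;

            /* Phase 1: synthesize in parallel, one temporary file per thread. */
            for (i = 0; i < NR_WORKERS; i++) {
                    workers[i].idx = i;
                    pthread_create(&workers[i].tid, NULL, synth_worker, &workers[i]);
            }
            for (i = 0; i < NR_WORKERS; i++)
                    pthread_join(workers[i].tid, NULL);

            /* Phase 2: serialize, appending each temp file to perf.data in order. */
            out = fopen("perf.data", "ab");
            if (!out)
                    return 1;
            for (i = 0; i < NR_WORKERS; i++) {
                    in = fopen(workers[i].path, "rb");
                    if (!in)
                            continue;
                    while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
                            fwrite(buf, 1, n, out);
                    fclose(in);
                    remove(workers[i].path);
            }
            fclose(out);
            return 0;
    }

    (Phase 2 is the serialization step; the question quoted further below
    asks how much faster synthesization gets if that step is skipped
    entirely.)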
    > > The source code is also available at
    > > https://github.com/kliang2/perf.git perf_record_opt
    > >
    > > Usually, event synthesization happens only once, at either start or
    > > end. With the snapshotting code, events are synthesized multiple
    > > times, once for each new perf.data file. Both cases have been verified.
    > >
    > > Here are the latency test results on a Knights Mill system and a
    > > Skylake server.
    > >
    > > The workload is compiling the Linux kernel as below:
    > > "sudo nice make -j$(grep -c '^processor' /proc/cpuinfo)"
    > > Then run: "sudo perf record -e cycles -a -- sleep 1"
    > >
    > > The latency is the time cost of __machine__synthesize_threads or its
    > > multithreading replacement, record__multithread_synthesize.
    > >
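    (For reference, the figures below are wall-clock time around that single
    call; a trivial way to obtain such a number is sketched here. The timing
    wrapper is illustrative only -- just the two function names above come
    from the cover letter.)

    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>
    #include <time.h>

    /* Stand-in for __machine__synthesize_threads() or
     * record__multithread_synthesize(); the real calls take arguments. */
    static void synthesize(void) { }

    int main(void)
    {
            struct timespec start, end;

            clock_gettime(CLOCK_MONOTONIC, &start);
            synthesize();
            clock_gettime(CLOCK_MONOTONIC, &end);

            printf("synthesization latency: %.2f s\n",
                   (end.tv_sec - start.tv_sec) +
                   (end.tv_nsec - start.tv_nsec) / 1e9);
            return 0;
    }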
    > > - Latency on Knights Mill (272 CPUs)
    > >
    > > Original (s)   With patch (s)   Speedup
    > > 12.74          5.54             2.3X
    > >
    > > - Latency on Skylake server (192 CPUs)
    > >
    > > Original (s)   With patch (s)   Speedup
    > > 0.36           0.25             1.47X
    >
    > Btw., just as an interesting experiment, could you try to measure how it
    > performs when it only creates the per-CPU files and does *not* dump them
    > into a single file?
    >

    Sure, please find the experiment results in the cover letter of the V3
    patch series.

    Thanks,
    Kan

    > I.e. how much faster will it get if the serialization at the end is avoided?
    >
    > Of course nothing can read such per-CPU files yet, so this is just for scalability
    > measurement.
    >
    > Thanks,
    >
    > Ingo
