Subject: Re: [BUG] perf report: ordered events and flushing bug

On Thu, Mar 12, 2015 at 3:34 PM, David Ahern <dsahern@gmail.com> wrote:
> On 3/12/15 1:23 PM, Stephane Eranian wrote:
>>>
>>> Rounds and flushing after them help with the user experience -- at least
>>> for some commands. On systems with 1024 CPUs, perf data files get large
>>> quickly and the resulting analysis command can appear to hang for long
>>> periods (e.g., I have done 1-second data collections that generate 5GB
>>> files and take 30-45 minutes before perf shows any output).
>>>
>>> I have a patch to perf-record to allow a user to specify the poll
>>> timeout. That helps define the maximum time between 'rounds' on the
>>> record side, and then commands that show each event (e.g., perf script)
>>> will show output much more quickly, without the need to consume large
>>> amounts of memory or process the entire file before displaying events.
>>>
>> I can understand this, but not at the cost of correctness. You have no
>> guarantee that events will appear in the perf.data file within a fixed
>> maximum time window, i.e., beyond the window, out-of-order events may be
>> discarded. There ought to be a better way of keeping the user aware of
>> what's going on with the progress bar.
>
> Theoretically there are no events coming in out of order. Have you seen
> the comment before process_finished_round() in util/session.c?
>
> In practice it did happen occasionally and perf would dump:
>
>     Warning: Timestamp below last timeslice flush
>
> and then terminate. That was removed recently by Jiri, I think, and events
> outside the window are discarded (??).
>
> In my case I don't want a progress bar, I want output, and I don't need
> the entire file processed before output starts. For a while I was dropping
> back to the ftrace interface in debugfs because it was faster.
>
> For the JIT case you tried appending events at the end of the file, but
> that puts them nowhere near the real time points, so what you really need
> is to insert events into the stream. Perhaps the separate-files option
> would help -- jit events in a separate file, inserted based on time as the
> files are processed?
>
That's how I had it initially, but it means you have to use perf inject, or
have perf record inject at the end, i.e., two passes over the perf.data
file. I can certainly have this in perf inject.
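
For reference, inserting the jit events from a side file based on time is
basically a two-way merge of two timestamp-sorted streams. A minimal sketch
of the idea (the types and names here are made up for illustration, not
perf's actual data structures):

#include <inttypes.h>
#include <stdio.h>

/* Illustrative event record: a timestamp plus an opaque description. */
struct ev {
	uint64_t time;
	const char *desc;
};

/*
 * Merge a time-sorted stream of jit events into the time-sorted main
 * sample stream, so each jit event lands next to the samples it explains.
 */
static void merge_by_time(const struct ev *main_ev, size_t n_main,
			  const struct ev *jit_ev, size_t n_jit)
{
	size_t i = 0, j = 0;

	while (i < n_main || j < n_jit) {
		int take_jit = (i == n_main) ||
			       (j < n_jit && jit_ev[j].time <= main_ev[i].time);

		if (take_jit) {
			printf("%" PRIu64 " %s\n", jit_ev[j].time, jit_ev[j].desc);
			j++;
		} else {
			printf("%" PRIu64 " %s\n", main_ev[i].time, main_ev[i].desc);
			i++;
		}
	}
}

int main(void)
{
	const struct ev samples[] = { {100, "sample"}, {200, "sample"}, {300, "sample"} };
	const struct ev jit[]     = { {150, "jit: foo compiled"}, {250, "jit: bar compiled"} };

	merge_by_time(samples, 3, jit, 2);
	return 0;
}

The same walk applies whether the jit stream is merged on the fly while the
files are processed or written back by a separate perf inject pass.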

What's the point of having all the ordered event logic if you are saying
events must be saved in order? I don't think there is a way to make that
guarantee when monitoring multiple CPUs at the same time.
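
For the record, the round-based flushing being discussed works roughly like
this. This is only a minimal sketch of the idea behind
process_finished_round(), with invented names and a fixed-size queue, not
the actual code in util/session.c:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

struct sample {
	uint64_t time;			/* payload omitted */
};

static struct sample queue[1024];
static size_t nr;
static uint64_t last_flush;		/* everything <= this was already delivered */
static uint64_t round_max;		/* max timestamp of the previous round */
static uint64_t max_seen;		/* max timestamp queued so far */

static int cmp_time(const void *a, const void *b)
{
	const struct sample *sa = a, *sb = b;

	return (sa->time > sb->time) - (sa->time < sb->time);
}

/* Samples from the per-cpu buffers may arrive slightly out of order. */
static void queue_sample(uint64_t t)
{
	if (t < last_flush) {
		/* Too late to reorder: the old warning/termination case. */
		fprintf(stderr, "timestamp %" PRIu64 " below last flush %" PRIu64 ", dropping\n",
			t, last_flush);
		return;
	}
	queue[nr++].time = t;
	if (t > max_seen)
		max_seen = t;
}

/*
 * On a finished-round event: everything older than the previous round's
 * max timestamp should be queued by now, so sort the queue and deliver
 * that prefix in timestamp order.
 */
static void finished_round(void)
{
	size_t i, kept = 0;

	qsort(queue, nr, sizeof(queue[0]), cmp_time);
	for (i = 0; i < nr; i++) {
		if (queue[i].time <= round_max)
			printf("deliver %" PRIu64 "\n", queue[i].time);
		else
			queue[kept++] = queue[i];
	}
	nr = kept;
	last_flush = round_max;		/* anything older is now too late */
	round_max = max_seen;		/* the next round may flush up to here */
}

int main(void)
{
	/* Two CPUs interleaving: timestamps arrive slightly out of order. */
	queue_sample(105); queue_sample(100); queue_sample(110);
	finished_round();		/* delivers nothing yet (first round) */
	queue_sample(120); queue_sample(115);
	finished_round();		/* delivers 100, 105, 110 */
	queue_sample(90);		/* below last flush: warned and dropped */
	finished_round();		/* delivers 115, 120 */
	return 0;
}

The sample with timestamp 90 above is exactly the "Timestamp below last
timeslice flush" case: once the flush point has passed, an older event can
no longer be placed back in order.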

From your description it seems to me your script wants to process the data
quickly. Is this an interactive tool? If not, you need to go through all
the samples anyway.

> David

