Subject: Re: [PATCH v9 2/3]: perf record: enable asynchronous trace writing
From: Alexey Budankov
Date: 2018-10-05
Hi,

On 05.10.2018 10:16, Namhyung Kim wrote:
> On Wed, Oct 03, 2018 at 07:12:10PM +0300, Alexey Budankov wrote:
<SNIP>
>> +static void record__aio_sync(struct perf_mmap *md)
>> +{
>> +	struct aiocb *cblock = &md->cblock;
>> +	struct timespec timeout = { 0, 1000 * 1000 * 1 }; // 1ms
>> +
>> +	do {
>> +		if (cblock->aio_fildes == -1 || record__aio_complete(md, cblock))
>> +			return;
>> +
>> +		while (aio_suspend((const struct aiocb**)&cblock, 1, &timeout)) {
>> +			if (!(errno == EAGAIN || errno == EINTR))
>> +				pr_err("failed to sync perf data, error: %m\n");
>
> Is there something we can do in this error case? Any chance it gets
> stuck in the loop?

Not really. Currently, in glibc, aio_suspend() can only block on a mutex.
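
If that loop ever did need a hard stop, one option (purely a sketch, not
part of this patch) would be to bail out after reporting an unrecoverable
error instead of retrying:

	while (aio_suspend((const struct aiocb **)&cblock, 1, &timeout)) {
		if (!(errno == EAGAIN || errno == EINTR)) {
			pr_err("failed to sync perf data, error: %m\n");
			return;	/* hypothetical: give up instead of retrying */
		}
	}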

>
>
>> +		}
>> +	} while (1);
>> +}
>> +
>> +static int record__aio_pushfn(void *to, struct aiocb *cblock, void *bf, size_t size)
>> +{
>> +	off_t off;
>> +	struct record *rec = to;
>> +	int ret, trace_fd = rec->session->data->file.fd;
>> +
>> +	rec->samples++;
>> +
>> +	off = lseek(trace_fd, 0, SEEK_CUR);
>> +	lseek(trace_fd, off + size, SEEK_SET);
>
> It'd be nice if these lseek() could be removed and use
> rec->bytes_written instead.

Well, it could be implemented like this, avoiding the lseek() in the else branch:

off = lseek(trace_fd, 0, SEEK_CUR);
ret = record__aio_write(cblock, trace_fd, bf, size, off);
if (!ret) {
	lseek(trace_fd, off + size, SEEK_SET);
	rec->bytes_written += size;

	if (switch_output_size(rec))
		trigger_hit(&switch_output_trigger);
}
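
For comparison, a variant without any lseek(), along the lines of the
suggestion above, might look roughly like this (assuming the write offset
can be derived from the session header plus rec->bytes_written; just a
sketch):

/* hypothetical: compute the write offset instead of querying the file */
off = rec->session->header.data_offset + rec->bytes_written;
ret = record__aio_write(cblock, trace_fd, bf, size, off);
if (!ret) {
	rec->bytes_written += size;

	if (switch_output_size(rec))
		trigger_hit(&switch_output_trigger);
}
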
>
>
>> +	ret = record__aio_write(cblock, trace_fd, bf, size, off);
>> +	if (!ret) {
>> +		rec->bytes_written += size;
>> +
>> +		if (switch_output_size(rec))
>> +			trigger_hit(&switch_output_trigger);
>
> Doesn't it need the _sync() before the trigger? Maybe it should be
> moved to record__mmap_read_evlist() or so..

Currently the trigger just updates a state variable. The state is then
checked through a separate API in __cmd_record(), where
record__mmap_read_sync() is called prior to switching to a new trace file
or finishing the collection.
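
Roughly, that check in __cmd_record() looks like this (a simplified sketch;
trigger_is_hit() and record__switch_output() are the existing perf helpers,
the record__mmap_read_sync() call is per this series and shown schematically):

	if (trigger_is_hit(&switch_output_trigger)) {
		/* wait for in-flight aio writes to complete before rotating */
		record__mmap_read_sync(rec);
		fd = record__switch_output(rec, false);
		if (fd < 0)
			goto out_child;
	}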

>>
<SNIP>
>>  	if (map->base) {
>> +#ifndef HAVE_AIO_SUPPORT
>>  		if (perf_mmap__push(map, rec, record__pushfn) != 0) {
>>  			rc = -1;
>>  			goto out;
>>  		}
>> +#else
>> +		if (!rec->opts.nr_cblocks) {
>> +			if (perf_mmap__push(map, rec, record__pushfn) != 0) {
>> +				rc = -1;
>> +				goto out;
>> +			}
>> +		} else {
>> +			/*
>> +			 * Call record__aio_sync() to wait till map->data buffer
>> +			 * becomes available after previous aio write request.
>> +			 */
>> +			record__aio_sync(map);
>> +			if (perf_mmap__aio_push(map, rec, record__aio_pushfn) != 0) {
>> +				rc = -1;
>> +				goto out;
>> +			}
>> +		}
>> +#endif
>
> If dummy aio functions are provided, the #ifdef can be removed and
> just the #else part used, assuming opts.nr_cblocks == 0.

Yes, it looks a little bit cumbersome. Would this be more compact?

if (map->base) {
#ifdef HAVE_AIO_SUPPORT
	if (!rec->opts.nr_cblocks) {
#endif
		if (perf_mmap__push(map, rec, record__pushfn) != 0) {
			rc = -1;
			goto out;
		}
#ifdef HAVE_AIO_SUPPORT
	} else {
		int idx;
		/*
		 * Call record__aio_sync() to wait till map->data buffer
		 * becomes available after previous aio write request.
		 */
		idx = record__aio_sync(map, false);
		if (perf_mmap__aio_push(map, rec, idx, record__aio_pushfn) != 0) {
			rc = -1;
			goto out;
		}
	}
#endif
}
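
For completeness, the dummy-functions route mentioned above could look
something like this (hypothetical stubs; names and signatures here are
assumptions, since the real ones only exist behind HAVE_AIO_SUPPORT):

#ifndef HAVE_AIO_SUPPORT
struct aiocb;

/*
 * No-op fallbacks: without aio support opts.nr_cblocks stays 0,
 * so the aio branch at the call site is never taken.
 */
static void record__aio_sync(struct perf_mmap *md __maybe_unused)
{
}

static int perf_mmap__aio_push(struct perf_mmap *md __maybe_unused, void *to __maybe_unused,
			       int (*push)(void *to, struct aiocb *cblock, void *buf, size_t size) __maybe_unused)
{
	return 0;
}
#endif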

Thanks,
Alexey
