Subject: Re: [PATCH] coresight: tmc-etr: Fix updating buffer in not-snapshot mode.

Good day Yabin,

With this patch you are addressing a long-time itch I've had - please read on.

On Mon, 5 Aug 2019 at 17:37, Yabin Cui <yabinc@google.com> wrote:
>
> TMC ETR always copies all available data to the perf aux buffer, which
> may exceed the space available in the perf aux buffer. That isn't
> suitable for not-snapshot mode, because:
> 1) It may overwrite previously written data.
> 2) It may make perf_event_mmap_page->aux_head report more or less
> data than is actually available.
>
> Signed-off-by: Yabin Cui <yabinc@google.com>
> ---
> drivers/hwtracing/coresight/coresight-tmc-etr.c | 12 ++++++++----
> 1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c
> index 17006705287a..697e68d492af 100644
> --- a/drivers/hwtracing/coresight/coresight-tmc-etr.c
> +++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c
> @@ -1410,9 +1410,10 @@ static void tmc_free_etr_buffer(void *config)
> * tmc_etr_sync_perf_buffer: Copy the actual trace data from the hardware
> * buffer to the perf ring buffer.
> */
> -static void tmc_etr_sync_perf_buffer(struct etr_perf_buffer *etr_perf)
> +static void tmc_etr_sync_perf_buffer(struct etr_perf_buffer *etr_perf,
> + unsigned long to_copy)
> {
> - long bytes, to_copy;
> + long bytes;
> long pg_idx, pg_offset, src_offset;
> unsigned long head = etr_perf->head;
> char **dst_pages, *src_buf;
> @@ -1423,7 +1424,6 @@ static void tmc_etr_sync_perf_buffer(struct etr_perf_buffer *etr_perf)
> pg_offset = head & (PAGE_SIZE - 1);
> dst_pages = (char **)etr_perf->pages;
> src_offset = etr_buf->offset;
> - to_copy = etr_buf->len;
>
> while (to_copy > 0) {
> /*
> @@ -1501,7 +1501,11 @@ tmc_update_etr_buffer(struct coresight_device *csdev,
> spin_unlock_irqrestore(&drvdata->spinlock, flags);
>
> size = etr_buf->len;
> - tmc_etr_sync_perf_buffer(etr_perf);
> + if (!etr_perf->snapshot && size > handle->size) {
> + size = handle->size;
> + lost = true;
> + }

Perfect - this is in line with what is done for ETB and ETF.

> + tmc_etr_sync_perf_buffer(etr_perf, size);

Here tmc_etr_sync_perf_buffer() will copy @size bytes to the perf ring
buffer, reading from the ETR buffer starting at @etr_buf->offset, which
clips the _end_ of the trace data accumulated in the ETR buffer. This is
contrary to what is done for ETB and ETF, where the equivalent of that
read offset is moved forward (clipping the _beginning_ of the trace
data) in order to keep as much of the most recent trace as possible.

I would rather see the same heuristic applied here.
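
Something along the following lines (an untested sketch against the hunk
above; I'm assuming etr_buf->size holds the length of the circular ETR
buffer and that tmc_etr_sync_perf_buffer() grows a source-offset
parameter) would keep the newest trace data instead:

        size = etr_buf->len;
        src_offset = etr_buf->offset;
        if (!etr_perf->snapshot && size > handle->size) {
                /* Drop the oldest bytes so the newest ones fit. */
                src_offset += size - handle->size;
                /* Wrap around within the circular ETR buffer. */
                if (src_offset >= etr_buf->size)
                        src_offset -= etr_buf->size;
                size = handle->size;
                lost = true;
        }
        tmc_etr_sync_perf_buffer(etr_perf, src_offset, size);

That way the clipping happens at the beginning of the captured trace,
matching the ETB/ETF behaviour.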

Thanks,
Mathieu

>
> /*
> * In snapshot mode we simply increment the head by the number of byte
> --
> 2.22.0.770.g0f2c4a37fd-goog
>
