From: yuan linyu <Linyu.Yuan@alcatel-sbell.com.cn>
Subject: [PATCH for-next v2] tracing: optimize trace_buffer_iter()
Date: 2018-04-08

Use the conditional operator. The second NULL check is redundant: when
iter->buffer_iter[cpu] is NULL, returning it directly is the same as
returning NULL.

Signed-off-by: yuan linyu <Linyu.Yuan@alcatel-sbell.com.cn>
---
kernel/trace/trace.h | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 6fb46a06c9dc..bda717ab2095 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -582,9 +582,7 @@ static __always_inline void trace_clear_recursion(int bit)
static inline struct ring_buffer_iter *
trace_buffer_iter(struct trace_iterator *iter, int cpu)
{
- if (iter->buffer_iter && iter->buffer_iter[cpu])
- return iter->buffer_iter[cpu];
- return NULL;
+ return iter->buffer_iter ? iter->buffer_iter[cpu] : NULL;
}

int tracer_init(struct tracer *t, struct trace_array *tr);
--
2.14.1
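
As an aside (not part of the patch): both forms return the same value in
every case, since a NULL entry in buffer_iter[] yields NULL either way. A
minimal userspace sketch with a stand-in struct (not the kernel's
ring_buffer_iter definition) that checks the equivalence:

	#include <assert.h>
	#include <stddef.h>

	/* Stand-in type; the real struct ring_buffer_iter lives in the kernel. */
	struct ring_buffer_iter { int dummy; };

	/* Original form: two checks, explicit NULL return. */
	static struct ring_buffer_iter *old_form(struct ring_buffer_iter **buf, int cpu)
	{
		if (buf && buf[cpu])
			return buf[cpu];
		return NULL;
	}

	/* Patched form: single conditional operator. */
	static struct ring_buffer_iter *new_form(struct ring_buffer_iter **buf, int cpu)
	{
		return buf ? buf[cpu] : NULL;
	}

	int main(void)
	{
		static struct ring_buffer_iter it;
		struct ring_buffer_iter *iters[2] = { NULL, &it };

		/* NULL array, NULL entry, and non-NULL entry all agree. */
		assert(old_form(NULL, 0) == new_form(NULL, 0));
		assert(old_form(iters, 0) == new_form(iters, 0));
		assert(old_form(iters, 1) == new_form(iters, 1));
		return 0;
	}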
