Subject: Re: [PATCH v2 -tip 3/4] tracing: make a snapshot feature available from userspace
Hi, Steven,

(2012/11/30 23:17), Steven Rostedt wrote:
[snip]
>
> Actually, I would have:
>
>  status\input |     0      |     1      |    else    |
> --------------+------------+------------+------------+
> not allocated |(do nothing)| alloc+swap |   EINVAL   |
> --------------+------------+------------+------------+
>  allocated    |    free    |    swap    |   clear    |
> --------------+------------+------------+------------+
>
> Perhaps we don't need to do the clear on swap, just let the trace
> continue where it left off? But in case we should swap...
>

I agree that we don't need the clear on swap.
I'll update my patches to follow this table.
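
To check that I read the table correctly, the write handler would dispatch
roughly like this (just a sketch; snapshot_allocated(), allocate_snapshot(),
swap_snapshot(), free_snapshot() and clear_snapshot() are placeholder names,
not the actual patch code):

static ssize_t tracing_snapshot_write(struct file *filp, const char __user *ubuf,
                                      size_t cnt, loff_t *ppos)
{
        unsigned long val;
        int ret;

        ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
        if (ret)
                return ret;

        mutex_lock(&trace_types_lock);

        if (!snapshot_allocated()) {
                switch (val) {
                case 0:
                        break;                          /* do nothing */
                case 1:
                        ret = allocate_snapshot();      /* alloc... */
                        if (!ret)
                                swap_snapshot();        /* ...then swap */
                        break;
                default:
                        ret = -EINVAL;
                        break;
                }
        } else {
                switch (val) {
                case 0:
                        free_snapshot();
                        break;
                case 1:
                        swap_snapshot();                /* no clear on swap */
                        break;
                default:
                        clear_snapshot();               /* could be skipped if already clear */
                        break;
                }
        }

        mutex_unlock(&trace_types_lock);

        if (ret)
                return ret;

        *ppos += cnt;
        return cnt;
}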

> There's a fast way to clear the tracer. Look at what the wakeup tracer
> does. We can make that generic. If you want, I can write that code up
> too. Hmm, maybe I'll do that, as it will speed things up for
> everyone :-)
>

(I looked over the wakeup tracer, but I couldn't find that code...)

>
>> I think it is almost OK, but there is a problem.
>> When we echo "1" to the allocated snapshot, the clear operation adds
>> some delay because the time cost of tracing_reset_online_cpus() is in
>> proportion to the number of CPUs.
>> (It takes 72ms in my 8 CPU environment.)
>>
>> So, when the snapshot is already cleared by echoing "else" values, we
>> can avoid the delay on echoing "1" by keeping "cleared" status
>> internally. For example, we can add the "cleared" flag to struct tracer.
>> What do you think about it?
>>
>> >
>> > Also we can add a "trace_snapshot" to the kernel parameters to have it
>> > allocated on boot. But I can add that if you update these patches.
>> >
>>
>> OK, I'll update my patches.
>
> This part (kernel parameter) can be a separate patch.
>

Yes.

>
>> > Either test here, or better yet, put the test into
>> > tracing_reset_online_cpus().
>> >
>> >         if (!buffer)
>> >                 return;
>> >
>>
>> I see. I'll add the test to tracing_reset_online_cpus(). Should I make a
>> separated patch?
>
> It's a small change, you can add it to your patch or make it separate.
> I'll leave that up to you.
>

I see. Perhaps I'll make it separate.
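
(To be concrete, the test would just sit at the top of
tracing_reset_online_cpus(); the rest of the function below is only
paraphrased from memory and may not match the current source exactly.)

void tracing_reset_online_cpus(struct trace_array *tr)
{
        struct ring_buffer *buffer = tr->buffer;
        int cpu;

        if (!buffer)
                return;

        ring_buffer_record_disable(buffer);

        /* Make sure all commits have finished */
        synchronize_sched();

        tr->time_start = ftrace_now(tr->cpu);

        for_each_online_cpu(cpu)
                ring_buffer_reset_cpu(buffer, cpu);

        ring_buffer_record_enable(buffer);
}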

>> [snip]
>> >> +static ssize_t tracing_snapshot_read(struct file *filp, char __user *ubuf,
>> >> +                                     size_t cnt, loff_t *ppos)
>> >> +{
>> >> +        ssize_t ret = 0;
>> >> +
>> >> +        mutex_lock(&trace_types_lock);
>> >> +        if (current_trace && current_trace->use_max_tr)
>> >> +                ret = -EBUSY;
>> >> +        mutex_unlock(&trace_types_lock);
>> >
>> > I don't like this, as it is racy. The current tracer could change after
>> > the unlock, and you're back to the problem.
>> >
>>
>> You're right...
>> This is racy.
>>
>> > Now what we may be able to do, but it would take a little checking for
>> > lock ordering with trace_access_lock() and trace_event_read_lock(), but
>> > we could add the mutex to trace_types_lock to both s_start() and
>> > s_stop() and do the check in s_start() if iter->snapshot is true.
>> >
>>
>> If I catch your meaning, do s_start() and s_stop() become like the
>> following code?
>> (Now, s_start() is used from two files - "trace" and "snapshot" - so we
>> should change the static "old_tracer" to a per-open-file member.)
>
> Actually, lets nuke all the old_tracer totally, and instead do:
>
> if (unlikely(strcmp(iter->trace->name, current_trace->name) != 0)) {
>
> You can make this into a separate patch. You can add a check if
> current_trace is not NULL too, but I need to fix that, as current_trace
> should never be NULL (except for very early boot). But don't worry about
> that, I'll handle that.
>

OK, I'll change all the old_tracer checks to the strcmp().
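
If I understand correctly, s_start() would then look roughly like this (only
a sketch combining your strcmp() check with the patch below, not tested):

static void *s_start(struct seq_file *m, loff_t *pos)
{
        struct trace_iterator *iter = m->private;
        ...
        /* copy the tracer to avoid using a global lock all around */
        mutex_lock(&trace_types_lock);
        if (unlikely(strcmp(iter->trace->name, current_trace->name) != 0))
                *iter->trace = *current_trace;
        mutex_unlock(&trace_types_lock);

        if (iter->snapshot && iter->trace->use_max_tr)
                return ERR_PTR(-EBUSY);
        ...
}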

> Or I can write up this patch and send it to you, and you can include it
> in your series.
>
>>  static void *s_start(struct seq_file *m, loff_t *pos)
>>  {
>>          struct trace_iterator *iter = m->private;
>> -        static struct tracer *old_tracer;
>>  ...
>>          /* copy the tracer to avoid using a global lock all around */
>>          mutex_lock(&trace_types_lock);
>> -        if (unlikely(old_tracer != current_trace && current_trace)) {
>> -                old_tracer = current_trace;
>> +        if (unlikely(iter->old_tracer != current_trace && current_trace)) {
>> +                iter->old_tracer = current_trace;
>>                  *iter->trace = *current_trace;
>>          }
>>          mutex_unlock(&trace_types_lock);
>>
>> +        if (iter->snapshot && iter->trace->use_max_tr)
>> +                return ERR_PTR(-EBUSY);
>> +
>>  ...
>>  }
>>
>>  static void s_stop(struct seq_file *m, void *p)
>>  {
>>          struct trace_iterator *iter = m->private;
>>
>> +        if (iter->snapshot && iter->trace->use_max_tr)
>> +                return;
>
> This part shouldn't be needed, as if s_start() fails it won't call
> s_stop(). But if you are paranoid (like I am ;-) then we can do:
>
>         if (WARN_ON_ONCE(iter->snapshot && iter->trace->use_max_tr))
>                 return;

I think that seq_read() calls s_stop() even if s_start() failed.

seq_read()@fs/seq_file.c:

        p = m->op->start(m, &pos);
        while (1) {
                err = PTR_ERR(p);
                if (!p || IS_ERR(p))
                        break;
                ...
        }
        m->op->stop(m, p);

So, I think we need the check in s_stop(), don't we?


Thanks,
Hiraku Toyooka

