Subject: Re: [PATCH v0] bpf: BPF based latency tracing

On 6/18/15 4:40 AM, Daniel Wagner wrote:
> BPF offers another way to generate latency histograms. We attach
> kprobes at trace_preempt_off and trace_preempt_on and calculate the
> time it takes from seeing the off to the on transition.
>
> The first array is used to store the start time stamp. The key is the
> CPU id. The second array stores the log2(time diff). We need to use
> static allocation here (arrays, not hash tables). The kprobes
> hooking into trace_preempt_on|off must not call into any dynamic
> memory allocation or free path; we need to avoid getting called
> recursively. Besides that, it reduces jitter in the measurement.
>
> CPU 0
>      latency       : count     distribution
>        1 -> 1      : 0        |                                        |
>        2 -> 3      : 0        |                                        |
>        4 -> 7      : 0        |                                        |
>        8 -> 15     : 0        |                                        |
>       16 -> 31     : 0        |                                        |
>       32 -> 63     : 0        |                                        |
>       64 -> 127    : 0        |                                        |
>      128 -> 255    : 0        |                                        |
>      256 -> 511    : 0        |                                        |
>      512 -> 1023   : 0        |                                        |
>     1024 -> 2047   : 0        |                                        |
>     2048 -> 4095   : 166723   |*************************************** |
>     4096 -> 8191   : 19870    |***                                     |
>     8192 -> 16383  : 6324     |                                        |
>    16384 -> 32767  : 1098     |                                        |

nice, useful sample indeed!
The numbers are non-JITed, right?
JIT should reduce the measurement cost 2-3x, but the preempt_on/off
latency will probably stay in the 2k range.

> I am not sure if it is really worth spending more time getting
> the hash table working for the trace_preempt_[on|off] kprobes.
> There are so many things which could go wrong, so going with
> a static version seems to me the right choice.

agree. for this use case arrays are the better choice anyway.
But I'll keep working on getting hash tables working even
in these extreme conditions. bpf should always be rock solid.
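
For readers following along, the static layout described in the changelog
could look roughly like this in samples/bpf style. The start-stamp map name
matches the quoted hunk below; MAX_CPU, MAX_SLOTS and the my_lat map are
placeholders of mine, not the actual patch:

#include <linux/ptrace.h>
#include <uapi/linux/bpf.h>
#include "bpf_helpers.h"

#define MAX_CPU		128	/* assumed upper bound on CPU ids */
#define MAX_SLOTS	64	/* assumed number of log2 buckets per CPU */

/* start time stamp, keyed by CPU id */
struct bpf_map_def SEC("maps") my_map = {
	.type		= BPF_MAP_TYPE_ARRAY,
	.key_size	= sizeof(int),
	.value_size	= sizeof(u64),
	.max_entries	= MAX_CPU,
};

/* log2(time diff) histogram, key = cpu * MAX_SLOTS + bucket */
struct bpf_map_def SEC("maps") my_lat = {
	.type		= BPF_MAP_TYPE_ARRAY,
	.key_size	= sizeof(int),
	.value_size	= sizeof(u64),
	.max_entries	= MAX_CPU * MAX_SLOTS,
};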

I'm only a bit suspicious of kprobes, since we have:
NOKPROBE_SYMBOL(preempt_count_sub)
but trace_preempt_on(), called by preempt_count_sub(),
doesn't have this mark...
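
(Purely as an illustration of the mechanism being discussed, not new code
for the patch: NOKPROBE_SYMBOL() is the per-function opt-out from kprobing,
and the existing annotation looks like this; the open question above is
whether the trace_preempt_on/off hooks need the same mark.)

#include <linux/kprobes.h>

void preempt_count_sub(int val)
{
	/* ... */
}
NOKPROBE_SYMBOL(preempt_count_sub);	/* kprobes must not attach here */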

> +SEC("kprobe/trace_preempt_off")
> +int bpf_prog1(struct pt_regs *ctx)
> +{
> +	int cpu = bpf_get_smp_processor_id();
> +	u64 *ts = bpf_map_lookup_elem(&my_map, &cpu);
> +
> +	if (ts)
> +		*ts = bpf_ktime_get_ns();
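
Going purely by the changelog description (the second array holds
log2(time diff) counts), the matching trace_preempt_on probe could be
sketched like this; the unrolled log2 helpers, the bucket layout and
my_lat are my assumptions, not the actual patch:

/* unrolled log2: the verifier does not accept loops */
static inline unsigned int log2(unsigned int v)
{
	unsigned int r, shift;

	r = (v > 0xffff) << 4; v >>= r;
	shift = (v > 0xff) << 3; v >>= shift; r |= shift;
	shift = (v > 0xf) << 2; v >>= shift; r |= shift;
	shift = (v > 0x3) << 1; v >>= shift; r |= shift;
	r |= (v > 1);
	return r;
}

static inline unsigned int log2l(unsigned long v)
{
	unsigned int hi = v >> 32;

	return hi ? log2(hi) + 32 : log2(v);
}

SEC("kprobe/trace_preempt_on")
int bpf_prog2(struct pt_regs *ctx)
{
	int cpu = bpf_get_smp_processor_id();
	u64 *ts = bpf_map_lookup_elem(&my_map, &cpu);

	if (ts && *ts) {
		u64 delta = bpf_ktime_get_ns() - *ts;
		int key = cpu * MAX_SLOTS + log2l(delta);
		u64 *cnt = bpf_map_lookup_elem(&my_lat, &key);

		/* keys are per CPU, so a plain increment is enough */
		if (cnt)
			(*cnt)++;
	}
	return 0;
}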

btw, I'm planning to add native per-cpu maps, which will
speed things up further and reduce the measurement overhead.
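
(To illustrate the idea only: a native per-cpu array does not exist yet as
of this thread, but with it the explicit CPU-id keying above would collapse
to something like this.)

struct bpf_map_def SEC("maps") my_map_percpu = {
	.type		= BPF_MAP_TYPE_PERCPU_ARRAY,	/* hypothetical at this point */
	.key_size	= sizeof(int),
	.value_size	= sizeof(u64),
	.max_entries	= 1,	/* one slot, replicated per CPU */
};

SEC("kprobe/trace_preempt_off")
int bpf_prog1_percpu(struct pt_regs *ctx)
{
	int key = 0;
	/* lookup returns the current CPU's copy of the value */
	u64 *ts = bpf_map_lookup_elem(&my_map_percpu, &key);

	if (ts)
		*ts = bpf_ktime_get_ns();
	return 0;
}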

I think you can retarget this patch to net-next and send
it to netdev. It's not too late for this merge window.


