Date: Thu, 11 Apr 2024 18:14:35 -0700
Subject: Re: Incorrect BPF stats accounting for fentry on arm64
From: Yonghong Song <>
On 4/11/24 11:09 AM, Ivan Babrou wrote:
> Hello,
>
> We're seeing incorrect data for bpf runtime stats on arm64. Here's an example:
>
> $ sudo bpftool prog show id 693110
> 693110: tracing  name __tcp_retransmit_skb  tag e37be2fbe8be4726  gpl
>         run_time_ns 2493581964213176 run_cnt 1133532 recursion_misses 1
>         loaded_at 2024-04-10T22:33:09+0000  uid 62727
>         xlated 312B  jited 344B  memlock 4096B  map_ids 8550445,8550441
>         btf_id 8726522
>         pids prometheus-ebpf(2224907)
>
> According to bpftool, this program reported 66555800ns of runtime at
> one point and then it jumped to 2493581675247416ns just 53s later when
> we looked at it again. This is happening only on arm64 nodes in our
> fleet, on both v6.1.82 and v6.6.25.
>
> We have two services that are involved:
>
> * ebpf_exporter attaches bpf programs to the kernel and exports
>   Prometheus metrics and OpenTelemetry traces driven by its probes
> * bpf_stats_exporter runs bpftool every 53s to capture bpf runtime metrics
>
> The problematic fentry is attached to __tcp_retransmit_skb, but an
> identical one is also attached to tcp_send_loss_probe, which does not
> exhibit the same issue:
>
> SEC("fentry/__tcp_retransmit_skb")
> int BPF_PROG(__tcp_retransmit_skb, struct sock *sk)
> {
>     return handle_sk((struct pt_regs *) ctx, sk, sk_kind_tcp_retransmit_skb);
> }
>
> SEC("fentry/tcp_send_loss_probe")
> int BPF_PROG(tcp_send_loss_probe, struct sock *sk)
> {
>     return handle_sk((struct pt_regs *) ctx, sk, sk_kind_tcp_send_loss_probe);
> }
>
> In handle_sk we do a map lookup and an optional ringbuf push. There is
> no sleeping (I don't think it's even allowed on v6.1). It's
> interesting that it only happens for the retransmit, but not for the
> loss probe.
>
> The issue manifests some time after we restart ebpf_exporter and
> reattach the probes. It doesn't happen immediately, as we need to
> capture metrics 53s apart to produce a visible spike in the metrics.
>
> There is no corresponding spike in execution count, only in execution time.
>
> It doesn't happen deterministically. Some ebpf_exporter restarts show
> it, some don't.
>
> It doesn't keep happening after an ebpf_exporter restart. It happens
> once and that's it.
>
> Maybe recursion_misses plays a role here? We see none for
> tcp_send_loss_probe. We do see some for the inet_sk_error_report
> tracepoint, but it doesn't spike like __tcp_retransmit_skb does.
>
> The biggest smoking gun is that it only happens on arm64.
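(handle_sk itself is not shown in the report. As a reference point for anyone reading along, a minimal hypothetical sketch matching its description above, a map lookup plus an optional ringbuf push, could look like the following; the map layout, event format and all names here are assumptions, not the reporter's actual code.)

/* Hypothetical sketch of handle_sk as described above. The two fentry
 * programs quoted earlier would call this; ctx and sk are unused here.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

enum sk_kind {
	sk_kind_tcp_retransmit_skb,
	sk_kind_tcp_send_loss_probe,
};

struct event {
	u64 kind;
	u64 pid_tgid;
};

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 16);
	__type(key, u64);
	__type(value, u64);
} enabled SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 256 * 1024);
} events SEC(".maps");

static int handle_sk(struct pt_regs *ctx, struct sock *sk, u64 kind)
{
	u64 *on = bpf_map_lookup_elem(&enabled, &kind);
	struct event *e;

	if (!on || !*on)
		return 0;

	e = bpf_ringbuf_reserve(&events, sizeof(*e), 0);
	if (!e)
		return 0; /* ringbuf full: the push is optional */

	e->kind = kind;
	e->pid_tgid = bpf_get_current_pid_tgid();
	bpf_ringbuf_submit(e, 0);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";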
I am not an arm64 expert, but you or somebody could check and compare the arm64 and x86 JIT trampoline code to see whether anything looks suspicious.
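For anyone doing that comparison: the reported numbers would be consistent with a bad start timestamp rather than a counting bug. 2493581964213176 ns is about 28.9 days, i.e. on the order of a machine's uptime, and a single sample accounted against a near-zero start stamp would inflate run_time_ns by roughly that much while adding only 1 to run_cnt, which would match the time-only spike. As a reference for what the trampoline does around each invocation, here is a simplified userspace model of the enter/exit accounting; the function and field names mirror __bpf_prog_enter()/__bpf_prog_exit() in kernel/bpf/trampoline.c (v6.1 naming), but this is an illustration of the mechanism, not the kernel code itself:

/* Userspace model of how fentry run time is accounted. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define NO_START_TIME 1 /* same sentinel the kernel uses */

struct prog_stats {
	uint64_t cnt;    /* run_cnt in bpftool output */
	uint64_t nsecs;  /* run_time_ns in bpftool output */
	uint64_t misses; /* recursion_misses in bpftool output */
};

static uint64_t sched_clock_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

/* Trampoline prologue: returns a start stamp, or 0 on recursion. */
static uint64_t prog_enter(struct prog_stats *stats, int *active)
{
	if (++(*active) != 1) {
		stats->misses++; /* shows up as recursion_misses */
		return 0;        /* program body and stats update are skipped */
	}
	return sched_clock_ns();
}

/* Trampoline epilogue: only a valid start stamp may be accounted. */
static void prog_exit(struct prog_stats *stats, int *active, uint64_t start)
{
	if (start > NO_START_TIME) {
		stats->cnt++;
		stats->nsecs += sched_clock_ns() - start;
	}
	(*active)--;
}

int main(void)
{
	struct prog_stats stats = {0};
	int active = 0;
	uint64_t start = prog_enter(&stats, &active);

	/* the BPF program body would run here when start != 0 */
	prog_exit(&stats, &active, start);
	printf("cnt=%lu nsecs=%lu misses=%lu\n",
	       (unsigned long)stats.cnt, (unsigned long)stats.nsecs,
	       (unsigned long)stats.misses);
	return 0;
}

The thing to compare between the arm64 and x86 trampolines is how the value returned by the enter helper is preserved across the program body and handed back to the exit helper.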
>
> I'm happy to try out patches to figure this one out.
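If it helps with bisecting, a minimal standalone poller over the same kernel counters might look like the sketch below, so the 53s cadence can be reproduced without either exporter. It uses libbpf's bpf_prog_get_fd_by_id() and bpf_obj_get_info_by_fd() to read the struct bpf_prog_info fields that bpftool prints; the program ID and interval in the usage line come from the report above, and it assumes BPF stats are already enabled (they must be, since bpftool shows run_time_ns):

/* Minimal poller for run_time_ns/run_cnt/recursion_misses.
 * Build: cc -o bpf_stats_poll bpf_stats_poll.c -lbpf
 * Usage: sudo ./bpf_stats_poll 693110
 */
#include <bpf/bpf.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	__u64 prev_ns = 0, prev_cnt = 0;
	__u32 id;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <prog_id>\n", argv[0]);
		return 1;
	}
	id = (__u32)strtoul(argv[1], NULL, 0);

	for (;;) {
		struct bpf_prog_info info;
		__u32 len = sizeof(info);
		int fd = bpf_prog_get_fd_by_id(id);

		if (fd < 0) {
			fprintf(stderr, "failed to get fd for prog %u\n", id);
			return 1;
		}
		memset(&info, 0, sizeof(info));
		if (bpf_obj_get_info_by_fd(fd, &info, &len)) {
			fprintf(stderr, "failed to get info for prog %u\n", id);
			close(fd);
			return 1;
		}
		close(fd);

		printf("run_time_ns=%llu (+%llu) run_cnt=%llu (+%llu) recursion_misses=%llu\n",
		       info.run_time_ns, info.run_time_ns - prev_ns,
		       info.run_cnt, info.run_cnt - prev_cnt,
		       info.recursion_misses);
		prev_ns = info.run_time_ns;
		prev_cnt = info.run_cnt;
		sleep(53); /* same interval as bpf_stats_exporter */
	}
}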