Date: Wed, 9 Oct 2019 10:26:35 -0400 (EDT)
From: Mathieu Desnoyers <>
Subject: Re: x86/kprobes bug? (was: [PATCH 1/3] x86/alternatives: Teach text_poke_bp() to emulate instructions)
+ hpa, paulmck
----- On Oct 9, 2019, at 9:07 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Fri, Oct 04, 2019 at 10:45:40PM +0900, Masami Hiramatsu wrote:
>
>> > > > > 	text_poke_bp(op->kp.addr, insn_buff, RELATIVEJUMP_SIZE,
>> > > > > -		     op->optinsn.insn);
>> > > > > +		     emulate_buff);
>> > > > >  }
>> > >
>> > > As argued in a previous thread, text_poke_bp() is broken when it changes
>> > > more than a single instruction at a time.
>> > >
>> > > Now, ISTR optimized kprobes does something like:
>> > >
>> > > 	poke INT3
>> >
>> > Hmm, it does this using text_poke(), but lacks a
>> > on_each_cpu(do_sync_core, NULL, 1), which I suppose is OK-ish IFF you do
>> > that synchronize_rcu_tasks() after it, but less so if you don't.
>> >
>> > That is, without either, you can't really tell if the kprobe is in
>> > effect or not.
>>
>> Yes, it doesn't wait for the change by design at this moment.
>
> Right, this might surprise some, I suppose, and I might've found a small
> issue with it, see below.
>
>> > > 	synchronize_rcu_tasks() /* waits for all tasks to schedule
>> > > 				   guarantees instructions after INT3
>> > > 				   are unused */
>> > > 	install optimized probe /* overwrites multiple instructions with
>> > > 				   JMP.d32 */
>> > >
>> > > And the above then undoes that by:
>> > >
>> > > 	poke INT3 on top of the optimized probe
>> > >
>> > > 	poke tail instructions back /* guaranteed safe because the
>> > > 				       above INT3 poke ensures the
>> > > 				       JMP.d32 instruction is unused */
>> > >
>> > > 	poke head byte back
>>
>> Yes, anyway, the last poke should recover another INT3... (for kprobe)
>
> It does indeed.
>
>> > > Is this correct? If so, we should probably put a comment in there
>> > > explaining how all this is unusual but safe.
>
> So from what I can tell of kernel/kprobes.c, what it does is something like:
>
> ARM: (__arm_kprobe)
> 	text_poke(INT3)
> 	/* guarantees nothing, INT3 will become visible at some point, maybe */
>
> (kprobe_optimizer)
> 	if (opt) {
> 		/* guarantees the bytes after INT3 are unused */
> 		synchronize_rcu_tasks();
> 		text_poke_bp(JMP32);
> 		/* implies IPI-sync, kprobe really is enabled */
> 	}
>
>
> DISARM: (__unregister_kprobe_top)
> 	if (opt) {
> 		text_poke_bp(INT3 + tail);
> 		/* implies IPI-sync, so tail is guaranteed visible */
> 	}
> 	text_poke(old);
>
>
> FREE: (__unregister_kprobe_bottom)
> 	/* guarantees 'old' is visible and the kprobe really is unused, maybe */
> 	synchronize_rcu();
> 	free();
>
>
> Now the problem is that I don't think the synchronize_rcu() at free
> implies enough to guarantee 'old' really is visible on all CPUs.
> Similarly, I don't think synchronize_rcu_tasks() is sufficient on the
> ARM side either. It only provides the guarantee -provided- the INT3 is
> actually visible. If it is not, all bets are off.
>
> I'd feel much better if we switch arch_arm_kprobe() over to using
> text_poke_bp(). Or at the very least add the on_each_cpu(do_sync_core)
> to it.
>
> Hmm?
Yes, I think you are right on both counts. synchronize_rcu() is not enough to guarantee that other cores have observed the required core serializing instructions.
I would also be more comfortable if we ensure core serialization for all cores after arming the kprobe with text_poke() (before doing the text_poke_bp to JMP32), and after the text_poke(old) in DISARM (before freeing, and possibly re-using, the memory).
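Concretely, and only as an untested sketch (assuming a local do_sync_core() helper like the static one in alternative.c, since it is not exported today), the ARM side could become something like:

	/* Hypothetical change to arch/x86/kernel/kprobes/core.c, sketch only. */
	static void do_sync_core(void *info)
	{
		sync_core();	/* execute a serializing instruction on this CPU */
	}

	void arch_arm_kprobe(struct kprobe *p)
	{
		text_poke(p->addr, ((unsigned char []){BREAKPOINT_INSTRUCTION}), 1);
		/* Ensure all cores observe the INT3 before we rely on it. */
		on_each_cpu(do_sync_core, NULL, 1);
	}

with the same on_each_cpu() added after the text_poke(old) in the DISARM path, before the memory can be freed.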
I think it might originally have been OK to text_poke the INT3 without core serialization, before optimized kprobes were introduced, since we would only switch back and forth between the original instruction { 0xAA, 0xBB, 0xCC, ... } and the breakpoint { INT3, 0xBB, 0xCC, ... }. But now that optimized kprobes add additional states, we end up requiring core serialization in case a core goes directly from observing the original instruction to observing the optimized kprobe's JMP32, without ever observing the intermediate INT3.
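Spelling out the states of the patched site (byte values are placeholders):

	original :	{ 0xAA, 0xBB, 0xCC, 0xDD, 0xEE }
	armed    :	{ INT3, 0xBB, 0xCC, 0xDD, 0xEE }
	optimized:	{ JMP32, <rel32 to out-of-line trampoline> }

The correctness argument relies on every core having observed the "armed" state before the tail bytes are rewritten; without a serializing IPI after the INT3 poke, a core can go straight from "original" to a partially updated "optimized" view of the instruction stream.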
The follow up patch you propose at https://lore.kernel.org/lkml/20191009132844.GG2359@hirez.programming.kicks-ass.net/ makes sense.
Now, depending on whether we care mostly about speed or robustness in this code, there is a small tweak we could make. The approach you propose aims for robustness by issuing a text_poke_sync() after each ARM/DISARM, which effectively sends IPIs to all cores even in the !opt cases. If we aim for speed in the !opt case, we might want to move the text_poke_sync() inside the if (opt) branches, so it only IPIs when the probe happens to be optimized.
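In other words, something like this for the DISARM path (pseudo-code against your patch, reusing the text_poke_sync() helper you introduce; not an actual diff):

	if (opt) {
		text_poke_bp(INT3 + tail);	/* implies IPI-sync */
		text_poke(old);
		text_poke_sync();		/* JMP32 was visible, IPI needed */
	} else {
		text_poke(old);			/* single byte INT3 <-> old, no IPI */
	}

The !opt path then only ever flips a single byte between INT3 and the original opcode, which is the case that was historically considered safe.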
In my personal opinion, I would prefer simple and robust over clever and fast for inserting kprobes, but you guys know more about the performance trade-offs than I do.
hpa provided very insightful feedback on those corner cases in the original text_poke_bp implementation thread, so having his input here would be great.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com