Date: Thu, 11 Apr 2024 23:37:42 -0700
From: Pawan Gupta <>
Subject: Re: [PATCH 5/7] x86/bugs: Only harden syscalls when needed
On Thu, Apr 11, 2024 at 11:28:54PM -0700, Pawan Gupta wrote:
> > +#define __do_syscall(table, func_direct, nr, regs)                  \
> > +({                                                                  \
> > +        unsigned long __rax, __rdi, __rsi;                          \
> > +                                                                    \
> > +        asm_inline volatile(                                        \
> > +                ALTERNATIVE("call " __stringify(func_direct) "\n\t", \
> > +                            ANNOTATE_RETPOLINE_SAFE                 \
> > +                            "call *%[func_ptr]\n\t",                \
> > +                            X86_FEATURE_INDIRECT_SAFE)              \
> > +                : "=D" (__rdi), "=S" (__rsi), "=a" (__rax),         \
> > +                  ASM_CALL_CONSTRAINT                               \
> > +                : "0" (regs), "1" (nr), [func_ptr] "r" (table[nr])  \
> > +                : "rdx", "rcx", "r8", "r9", "r10", "r11",           \
> > +                  "cc", "memory");                                  \
> > +                                                                    \
> > +        __rax;                                                      \
> > +})
>
> This is a nice implementation, but I think we can avoid the complexity
> by using cpu_feature_enabled(). As cpu_feature_enabled() is also runtime
> patched, at least the likely() path should be comparable to this. Please
> let me know if you have any concerns with this approach.
>
> ---
> diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> index 6de50b80702e..7c5332b83246 100644
> --- a/arch/x86/entry/common.c
> +++ b/arch/x86/entry/common.c
> @@ -49,7 +49,11 @@ static __always_inline bool do_syscall_x64(struct pt_regs *regs, int nr)
>
>  	if (likely(unr < NR_syscalls)) {
>  		unr = array_index_nospec(unr, NR_syscalls);
> -		regs->ax = x64_sys_call(regs, unr);
> +		if (likely(cpu_feature_enabled(X86_FEATURE_INDIRECT_SAFE)))
> +			regs->ax = sys_call_table[unr](regs);
> +		else
> +			regs->ax = x64_sys_call(regs, unr);
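For readers following along outside the kernel tree, here is a minimal
user-space sketch of the two dispatch strategies being compared. It is an
illustration only: feature_indirect_safe, sys_hello and sys_bye are made-up
stand-ins for cpu_feature_enabled(X86_FEATURE_INDIRECT_SAFE) and for real
syscall handlers, and a plain variable cannot show that the kernel's check
is runtime-patched into a direct branch.

/* dispatch-sketch.c - illustration only, not kernel code */
#include <stdio.h>

typedef long (*syscall_fn)(long);

static long sys_hello(long arg) { return arg + 1; }	/* made-up handler */
static long sys_bye(long arg)   { return arg - 1; }	/* made-up handler */

static const syscall_fn sys_call_table[] = { sys_hello, sys_bye };

/*
 * Stand-in for x64_sys_call(): every handler is reached through a
 * direct call, so no indirect branch exists for Spectre-style
 * training. A real dispatcher would bounds-check nr first, as the
 * array_index_nospec() line in the hunk above does.
 */
static long x64_sys_call_sketch(long arg, unsigned int nr)
{
	switch (nr) {
	case 0: return sys_hello(arg);
	case 1: return sys_bye(arg);
	default: return -1;
	}
}

/* stand-in for the runtime-patched cpu_feature_enabled() check */
static int feature_indirect_safe = 1;

static long do_syscall_sketch(unsigned int nr, long arg)
{
	if (feature_indirect_safe)
		return sys_call_table[nr](arg);	/* fast path: indirect call */
	return x64_sys_call_sketch(arg, nr);	/* hardened path: direct calls */
}

int main(void)
{
	printf("%ld\n", do_syscall_sketch(0, 41));	/* prints 42 */
	return 0;
}

The point of the suggestion is that the common case stays an ordinary table
dispatch, and the open-coded x64_sys_call() variant is only reached on CPUs
where the indirect call is not considered safe.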
BTW, this also solves the missing lfence case before the indirect call.
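Assuming the remark refers to the retpoline=lfence thunk mode (my reading;
the mail does not spell it out), the C-level indirect call picks up the
fence for free: when built with GCC's -mindirect-branch=thunk-extern, every
C indirect branch is routed through __x86_indirect_thunk_<reg>, which the
kernel patches at boot into a retpoline, an LFENCE plus jump, or a plain
jump, whereas the hand-written "call *%[func_ptr]" above bypasses that
machinery entirely. A compile-only sketch:

/* thunk-sketch.c - illustration only; compile with
 *   gcc -O2 -S -mindirect-branch=thunk-extern thunk-sketch.c
 * and the emitted asm reaches fn() via __x86_indirect_thunk_rax
 * instead of a bare "call *%rax" / "jmp *%rax". */
long call_indirect(long (*fn)(long), long arg)
{
	return fn(arg);	/* indirect branch, routed through the thunk */
}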