Subject: Re: [linux-next:master 3857/7963] arch/x86/crypto/sm4-aesni-avx-asm_64.o: warning: objtool: sm4_aesni_avx_crypt8()+0x8: sibling call from callable instruction with modified stack frame

On 9/21/21 1:56 AM, Josh Poimboeuf wrote:
> From: Josh Poimboeuf <jpoimboe@redhat.com>
> Subject: [PATCH] x86/crypto/sm4: Fix frame pointer stack corruption
>
> sm4_aesni_avx_crypt8() sets up the frame pointer (which includes pushing
> RBP) before doing a conditional sibling call to sm4_aesni_avx_crypt4(),
> which sets up an additional frame pointer. Things will not go well when
> sm4_aesni_avx_crypt4() pops only the innermost single frame pointer and
> then tries to return to the outermost frame pointer.
>
> Sibling calls need to occur with an empty stack frame. Do the
> conditional sibling call *before* setting up the stack frame.
>
> This fixes the following warning:
>
> arch/x86/crypto/sm4-aesni-avx-asm_64.o: warning: objtool: sm4_aesni_avx_crypt8()+0x8: sibling call from callable instruction with modified stack frame
>
> Fixes: a7ee22ee1445 ("crypto: x86/sm4 - add AES-NI/AVX/x86_64 implementation")
> Reported-by: kernel test robot <lkp@intel.com>
> Reported-by: Arnd Bergmann <arnd@kernel.org>
> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>

Thanks for your fix.

Reviewed-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>

Thanks.
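
For anyone reading along, here is a minimal sketch of why the old
ordering corrupts the return path. It assumes the usual expansion of
FRAME_BEGIN/FRAME_END from <asm/frame.h> when CONFIG_FRAME_POINTER=y
(push %rbp; mov %rsp, %rbp and pop %rbp); with frame pointers disabled
both macros expand to nothing and the problem stays hidden:

  /* Old ordering, simplified */
  SYM_FUNC_START(sm4_aesni_avx_crypt8)
          FRAME_BEGIN                     /* push %rbp; mov %rsp, %rbp */
          cmpq $5, %rcx;
          jb sm4_aesni_avx_crypt4;        /* sibling call: a plain jump,
                                           * no new return address is
                                           * pushed */
          ...

  SYM_FUNC_START(sm4_aesni_avx_crypt4)
          FRAME_BEGIN                     /* pushes a second saved RBP */
          ...
          FRAME_END                       /* pops only its own saved RBP */
          ret;                            /* pops crypt8's saved RBP and
                                           * jumps to it as if it were
                                           * the return address */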

> ---
> arch/x86/crypto/sm4-aesni-avx-asm_64.S | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/crypto/sm4-aesni-avx-asm_64.S b/arch/x86/crypto/sm4-aesni-avx-asm_64.S
> index fa2c3f50aecb..a50df13de222 100644
> --- a/arch/x86/crypto/sm4-aesni-avx-asm_64.S
> +++ b/arch/x86/crypto/sm4-aesni-avx-asm_64.S
> @@ -367,10 +367,12 @@ SYM_FUNC_START(sm4_aesni_avx_crypt8)
> * %rdx: src (1..8 blocks)
> * %rcx: num blocks (1..8)
> */
> - FRAME_BEGIN
>
> cmpq $5, %rcx;
> jb sm4_aesni_avx_crypt4;
> +
> + FRAME_BEGIN
> +
> vmovdqu (0 * 16)(%rdx), RA0;
> vmovdqu (1 * 16)(%rdx), RA1;
> vmovdqu (2 * 16)(%rdx), RA2;
>
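
With the sibling call moved ahead of FRAME_BEGIN as in the hunk above,
the same sketch (under the same assumptions about the macro expansion)
becomes:

  SYM_FUNC_START(sm4_aesni_avx_crypt8)
          cmpq $5, %rcx;
          jb sm4_aesni_avx_crypt4;        /* stack still holds only the
                                           * caller's return address */
          FRAME_BEGIN                     /* set up only on the 8-block
                                           * path, matched by FRAME_END
                                           * before crypt8's own ret */
          vmovdqu (0 * 16)(%rdx), RA0;
          ...

Now sm4_aesni_avx_crypt4()'s FRAME_BEGIN/FRAME_END pair stays balanced
and its ret lands on the real return address, which is also the
empty-frame state objtool expects at a sibling call.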
