Subject: [PATCH] x86/bitops: Remove unused __sw_hweight64() assembly implementation

* Nathan Chancellor <nathan@kernel.org> wrote:

> 4. modpost warning around __sw_hweight64
>
> With the first issue resolved:
>
> $ make -skj"$(nproc)" ARCH=i386 allmodconfig
> WARNING: modpost: EXPORT symbol "__sw_hweight64" [vmlinux] version ...
> Is "__sw_hweight64" prototyped in <asm/asm-prototypes.h>?

So I was hoping that this commit made explicit all the random indirect
header dependencies x86's <asm/asm-prototypes.h> imports on mainline:

headers/prep: x86/kbuild: Add symbol prototype header dependencies for modversions

... but an i386 case slipped through.

But, this actually highlights a real x86 symbol export bug IMO.

__arch_hweight64() on x86-32 is defined in the
arch/x86/include/asm/arch_hweight.h header as an inline, using
__arch_hweight32():


  #ifdef CONFIG_X86_32
  static inline unsigned long __arch_hweight64(__u64 w)
  {
          return __arch_hweight32((u32)w) +
                 __arch_hweight32((u32)(w >> 32));
  }

*But* there's also a __sw_hweight64() assembly implementation:

arch/x86/lib/hweight.S

  SYM_FUNC_START(__sw_hweight64)
  #ifdef CONFIG_X86_64
  ...
  #else /* CONFIG_X86_32 */
          /* We're getting an u64 arg in (%eax,%edx): unsigned long hweight64(__u64 w) */
          pushl   %ecx

          call    __sw_hweight32
          movl    %eax, %ecx      # stash away result
          movl    %edx, %eax      # second part of input
          call    __sw_hweight32
          addl    %ecx, %eax      # result

          popl    %ecx
          ret
  #endif
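
For contrast, the 64-bit half of that file is not dead code: on x86-64,
__arch_hweight64() is an inline that gets alternatives-patched to a POPCNT
instruction on capable CPUs and otherwise falls back to calling
__sw_hweight64(). Roughly (paraphrased from memory, so the constraint and
macro details may differ from the actual arch_hweight.h):

  static __always_inline unsigned long __arch_hweight64(__u64 w)
  {
          unsigned long res;

          /* patched to "popcntq %1, %0" when X86_FEATURE_POPCNT is set,
             otherwise left as a call to the assembly fallback: */
          asm (ALTERNATIVE("call __sw_hweight64", "popcntq %1, %0",
                           X86_FEATURE_POPCNT)
               : "=a" (res)
               : "D" (w));

          return res;
  }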

But on x86-32 this __sw_hweight64 assembly implementation is unused - it
essentially duplicates what the inline wrapper does. Yet we export this
unused helper with no prototype.

This went unnoticed in mainline, because mainline declares a prototype for
this unused function.
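
(If I remember correctly, the declarations in question live in mainline's
<linux/bitops.h> and look like this - quoting from memory, so the exact
location might differ:)

  extern unsigned int  __sw_hweight32(unsigned int w);
  extern unsigned long __sw_hweight64(__u64 w);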

So I think the real solution is to remove the unused 32-bit variant - see
the patch below.

Thanks,

Ingo

======================>
From: Ingo Molnar <mingo@kernel.org>
Date: Sat, 8 Jan 2022 12:33:58 +0100
Subject: [PATCH] x86/bitops: Remove unused __sw_hweight64() assembly implementation

Header cleanups in the fast-headers tree highlighted that we have an
unused assembly implementation for __sw_hweight64():

WARNING: modpost: EXPORT symbol "__sw_hweight64" [vmlinux] version ...

__arch_hweight64() on x86-32 is defined in the
arch/x86/include/asm/arch_hweight.h header as an inline, using
__arch_hweight32():

  #ifdef CONFIG_X86_32
  static inline unsigned long __arch_hweight64(__u64 w)
  {
          return __arch_hweight32((u32)w) +
                 __arch_hweight32((u32)(w >> 32));
  }

*But* there's also a __sw_hweight64() assembly implementation:

arch/x86/lib/hweight.S

  SYM_FUNC_START(__sw_hweight64)
  #ifdef CONFIG_X86_64
  ...
  #else /* CONFIG_X86_32 */
          /* We're getting an u64 arg in (%eax,%edx): unsigned long hweight64(__u64 w) */
          pushl   %ecx

          call    __sw_hweight32
          movl    %eax, %ecx      # stash away result
          movl    %edx, %eax      # second part of input
          call    __sw_hweight32
          addl    %ecx, %eax      # result

          popl    %ecx
          ret
  #endif

But on 32-bit this __sw_hweight64 assembly implementation is unused - it
essentially duplicates what the inline wrapper does.

Remove the unused 32-bit assembly variant and add a comment about it.

Reported-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
arch/x86/lib/hweight.S | 20 ++++++--------------
1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/arch/x86/lib/hweight.S b/arch/x86/lib/hweight.S
index dbf8cc97b7f5..585e2f1372d0 100644
--- a/arch/x86/lib/hweight.S
+++ b/arch/x86/lib/hweight.S
@@ -36,8 +36,12 @@ SYM_FUNC_START(__sw_hweight32)
 SYM_FUNC_END(__sw_hweight32)
 EXPORT_SYMBOL(__sw_hweight32)
 
-SYM_FUNC_START(__sw_hweight64)
+/*
+ * No 32-bit variant, because it's implemented as an inline wrapper
+ * on top of __arch_hweight32():
+ */
 #ifdef CONFIG_X86_64
+SYM_FUNC_START(__sw_hweight64)
         pushq   %rdi
         pushq   %rdx
 
@@ -66,18 +70,6 @@ SYM_FUNC_START(__sw_hweight64)
         popq    %rdx
         popq    %rdi
         ret
-#else /* CONFIG_X86_32 */
-        /* We're getting an u64 arg in (%eax,%edx): unsigned long hweight64(__u64 w) */
-        pushl   %ecx
-
-        call    __sw_hweight32
-        movl    %eax, %ecx      # stash away result
-        movl    %edx, %eax      # second part of input
-        call    __sw_hweight32
-        addl    %ecx, %eax      # result
-
-        popl    %ecx
-        ret
-#endif
 SYM_FUNC_END(__sw_hweight64)
 EXPORT_SYMBOL(__sw_hweight64)
+#endif