Subject: Re: percpu_modalloc oops when loading netfilter modules
From: Rusty Russell <rusty@rustcorp.com.au>
Date: 2005-08-01
On Sat, 2005-07-30 at 00:48 +0100, Daniel Drake wrote:
> Pete, Rusty,
>
> I found a snippet of a previous discussion of yours here:
>
> http://www.ussg.iu.edu/hypermail/linux/kernel/0408.3/2901.html
> http://www.ussg.iu.edu/hypermail/linux/kernel/0409.0/0768.html
>
> Did anything become of this issue?
>
> A Gentoo user has reported what appears to be the same problem on 2.6.12:
> http://bugs.gentoo.org/show_bug.cgi?id=97006

Name: Module per-cpu alignment cannot always be met.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (authored)

The module code assumes no one will ever ask for a per-cpu area more
than SMP_CACHE_BYTES aligned. However, as these cases show, gcc
sometimes asks for 32-byte alignment for the per-cpu section of a
module, and if CONFIG_X86_L1_CACHE_SHIFT is 4, we hit that BUG_ON().
This is obviously an unusual combination, as there have been few
reports, but better to warn than die.

See:
http://www.ussg.iu.edu/hypermail/linux/kernel/0409.0/0768.html

And more recently:
http://bugs.gentoo.org/show_bug.cgi?id=97006
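
For illustration only, here is a hypothetical module-side declaration (the
struct and its explicit 32-byte alignment attribute are assumptions for the
sketch, not taken from the reports) of the kind that gives a module's
.data.percpu section an sh_addralign of 32; with CONFIG_X86_L1_CACHE_SHIFT=4,
SMP_CACHE_BYTES is only 16, so the old BUG_ON() fires at load time:

/* Hypothetical example: the alignment of .data.percpu follows the largest
 * alignment of the objects placed in it, so an aligned(32) per-cpu type
 * pushes the section's sh_addralign to 32 even though SMP_CACHE_BYTES
 * may be 16 on the target box. */
#include <linux/percpu.h>

struct demo_stat {
	unsigned long packets;
	unsigned long bytes;
} __attribute__((aligned(32)));

static DEFINE_PER_CPU(struct demo_stat, demo_stats);

With the patch below, loading such a module should just log something like
"demo: per-cpu alignment 32 > 16" and fall back to SMP_CACHE_BYTES alignment
rather than dying.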

Index: linux-2.6.13-rc4-git3-Netfilter/kernel/module.c
===================================================================
--- linux-2.6.13-rc4-git3-Netfilter.orig/kernel/module.c 2005-08-01 14:58:44.000000000 +1000
+++ linux-2.6.13-rc4-git3-Netfilter/kernel/module.c 2005-08-01 15:21:30.000000000 +1000
@@ -250,13 +250,18 @@
 /* Created by linker magic */
 extern char __per_cpu_start[], __per_cpu_end[];
 
-static void *percpu_modalloc(unsigned long size, unsigned long align)
+static void *percpu_modalloc(unsigned long size, unsigned long align,
+			     const char *name)
 {
 	unsigned long extra;
 	unsigned int i;
 	void *ptr;
 
-	BUG_ON(align > SMP_CACHE_BYTES);
+	if (align > SMP_CACHE_BYTES) {
+		printk(KERN_WARNING "%s: per-cpu alignment %li > %i\n",
+		       name, align, SMP_CACHE_BYTES);
+		align = SMP_CACHE_BYTES;
+	}
 
 	ptr = __per_cpu_start;
 	for (i = 0; i < pcpu_num_used; ptr += block_size(pcpu_size[i]), i++) {
@@ -348,7 +353,8 @@
 }
 __initcall(percpu_modinit);
 #else /* ... !CONFIG_SMP */
-static inline void *percpu_modalloc(unsigned long size, unsigned long align)
+static inline void *percpu_modalloc(unsigned long size, unsigned long align,
+				    const char *name)
 {
 	return NULL;
 }
@@ -1644,7 +1650,8 @@
 	if (pcpuindex) {
 		/* We have a special allocation for this section. */
 		percpu = percpu_modalloc(sechdrs[pcpuindex].sh_size,
-					 sechdrs[pcpuindex].sh_addralign);
+					 sechdrs[pcpuindex].sh_addralign,
+					 mod->name);
 		if (!percpu) {
 			err = -ENOMEM;
 			goto free_mod;
--
A bad analogy is like a leaky screwdriver -- Richard Braakman

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
