Subject: Re: BUG: using smp_processor_id() in preemptible [00000000] during put_task_stack
On Fri 09-12-16 15:28:20, Michal Hocko wrote:
> Hi Andy,
> I am hitting the following
> [ 570.715345] BUG: using smp_processor_id() in preemptible [00000000] code: umount/6193
> [ 570.716880] caller is debug_smp_processor_id+0x17/0x19
> [ 570.717876] CPU: 2 PID: 6193 Comm: umount Tainted: G W 4.9.0-rc8-nofstest4-next-20161209-00012-g0db618dfb2cf #1017
> [ 570.720162] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.9.3-20161025_171302-gandalf 04/01/2014
> [ 570.720862] Call Trace:
> [ 570.720862] dump_stack+0x68/0x92
> [ 570.720862] check_preemption_disabled+0xce/0xe0
> [ 570.720862] debug_smp_processor_id+0x17/0x19
> [ 570.720862] vfree_atomic+0x2f/0x65
> [ 570.720862] put_task_stack+0xd0/0x13c
> [ 570.720862] kthread_stop+0x108/0x29a
> [ 570.720862] destroy_workqueue+0x167/0x1fb
> [ 570.720862] ext4_put_super+0x44/0x2fa
> [ 570.720862] generic_shutdown_super+0x6a/0xeb
> [ 570.720862] kill_block_super+0x27/0x67
> [ 570.720862] deactivate_locked_super+0x30/0x68
> [ 570.720862] deactivate_super+0x3e/0x41
> [ 570.720862] cleanup_mnt+0x58/0x76
> [ 570.720862] __cleanup_mnt+0x12/0x14
> [ 570.720862] task_work_run+0x77/0xa0
> [ 570.720862] exit_to_usermode_loop+0x67/0x92
> [ 570.720862] do_syscall_64+0x16c/0x197
> [ 570.720862] entry_SYSCALL64_slow_path+0x25/0x25
>
> it seems that vfree_atomic doesn't like being called with preemption
> enabled: __vfree_deferred uses this_cpu_ptr, which trips the
> debug_smp_processor_id check. I haven't studied the code deeply but I
> guess what we want is to put vfree_atomic into the atomic context.

OK, so I've double checked the code and I believe that the warning is
harmless. __vfree_deferred is indeed called from preemptible context,
but the current implementation should be safe wrt. preemption:
llist_add should work just fine even when preempted and never produce a
corrupted list AFAIU, and schedule_work should be OK as well.

> Something like the following, or at least disable preemption
> around vfree_atomic.

Scratch that. If anything, this should be handled in __vfree_deferred
rather than hidden in the caller.

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 43f0608dddc6..581fe682e8d7 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1498,10 +1498,11 @@ static void __vunmap(const void *addr, int deallocate_pages)
 
 static inline void __vfree_deferred(const void *addr)
 {
-	struct vfree_deferred *p = this_cpu_ptr(&vfree_deferred);
+	struct vfree_deferred *p = &get_cpu_var(vfree_deferred);
 
 	if (llist_add((struct llist_node *)addr, &p->list))
 		schedule_work(&p->wq);
+	put_cpu_var(vfree_deferred);
 }
 
 /**
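For context, get_cpu_var()/put_cpu_var() simply wrap the per-cpu access
in a preempt_disable()/preempt_enable() pair, roughly:

#define get_cpu_var(var)				\
(*({							\
	preempt_disable();				\
	this_cpu_ptr(&var);				\
}))

#define put_cpu_var(var)				\
do {							\
	(void)&(var);					\
	preempt_enable();				\
} while (0)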
But maybe this is just too prudent and we should simply do:

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 43f0608dddc6..5ecd32258d6a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1498,7 +1498,11 @@ static void __vunmap(const void *addr, int deallocate_pages)
 
 static inline void __vfree_deferred(const void *addr)
 {
-	struct vfree_deferred *p = this_cpu_ptr(&vfree_deferred);
+	/*
+	 * llist_add is preemption safe so we do not have to care
+	 * about preemption being disabled here
+	 */
+	struct vfree_deferred *p = raw_cpu_ptr(&vfree_deferred);
 
 	if (llist_add((struct llist_node *)addr, &p->list))
 		schedule_work(&p->wq);
--
Michal Hocko
SUSE Labs
