From: Andy Lutomirski <>
Date: Fri, 30 Oct 2015 18:44:19 -0700
Subject: 4.3 regression: task_work corruption in vm86 mode?
Hi all-
In 4.3-rc7, running dosemu2 (https://github.com/stsp/dosemu2/) oopses the system very quickly, as long as CONFIG_VM86=y. It blows up because snd_seq_delete_port walks ports_list_head, finds two valid ports, and then starts finding obviously invalid pointers in the list.
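The walk that blows up looks roughly like this (paraphrased from memory of sound/core/seq/seq_ports.c, so names and details may be slightly off):

    /* Roughly what snd_seq_delete_port() does; not the literal source. */
    struct snd_seq_client_port *p;

    list_for_each_entry(p, &client->ports_list_head, list) {
            /* Once a node in the list is corrupted, the entry pointer handed
             * back is garbage and this dereference faults, which is consistent
             * with the wild address in the oops quoted below. */
            if (p->addr.port == port) {
                    list_del(&p->list);
                    break;
            }
    }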
git bisect blames:
commit 5ed92a8ab71f8865ba07811429c988c72299b315
Author: Brian Gerst <brgerst@gmail.com>
Date:   Wed Jul 29 01:41:19 2015 -0400

    x86/vm86: Use the normal pt_regs area for vm86
I haven't spotted the problem yet. It seems to happen when task_work_run fires in get_signal, which happens before save_v86_state. I'm not entirely sure what causes task work to be scheduled at all while in v86 land. Could we somehow be processing task_work later than we should?
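To spell out the ordering I mean (a simplified sketch, from memory of the 4.3-era x86 signal code, not the literal source):

    static void do_signal(struct pt_regs *regs)
    {
            struct ksignal ksig;

            if (get_signal(&ksig)) {
                    /* get_signal() runs task_work_run() near its top, so the
                     * deferred __fput() seen in the trace below executes here,
                     * while pt_regs still holds the vm86 frame... */
                    handle_signal(&ksig, regs);
                    /* ...and only handle_signal() -> save_v86_state() switches
                     * back to the saved 32-bit registers. */
                    return;
            }
    }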
See below, too.
--Andy
On Fri, Oct 30, 2015 at 4:50 PM, Andy Lutomirski <luto@amacapital.net> wrote:
> On Tue, Oct 27, 2015 at 7:05 AM, Stas Sergeev <stsp@list.ru> wrote:
>> I archived my config and git hash.
>> I can't easily post an Oops: under X it doesn't even appear -
>> machine freezes immediately, and under non-KMS console it is
>> possible to get one, but difficult to screen-shot (using bare
>> metal, not VM). Also the Oops was seemingly unrelated.
>> And if you run "dosemu -s" under non-KMS console, you'll also
>> reproduce this one:
>> https://bugzilla.kernel.org/show_bug.cgi?id=97321
>
> Like this?
>
> [ 288.221786] BUG: unable to handle kernel paging request at ffffffb9
> [ 288.222475] IP: [<c169bf48>] snd_seq_delete_port+0x48/0xd0
> [ 288.222743] *pde = 01c8c067 *pte = 00000000
> [ 288.222743] Oops: 0000 [#1] SMP
> [ 288.222743] Modules linked in:
> [ 288.222743] CPU: 0 PID: 5480 Comm: dosemu.bin Not tainted 4.3.0-rc7+ #345
> [ 288.222743] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
> BIOS rel-1.8.2-0-g33fbe13 by qemu-project.org 04/01/2014
> [ 288.222743] task: c7006b40 ti: c7bb4000 task.ti: c7bb4000
> [ 288.222743] EIP: 0060:[<c169bf48>] EFLAGS: 00010082 CPU: 0
> [ 288.222743] EIP is at snd_seq_delete_port+0x48/0xd0
> [ 288.222743] EAX: 00000000 EBX: ffffffb8 ECX: c707c67c EDX: 00000001
> [ 288.222743] ESI: c707c600 EDI: c707c684 EBP: c7bb5d60 ESP: c7bb5d48
> [ 288.222743] DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
> [ 288.222743] CR0: 80050033 CR2: ffffffb9 CR3: 07b00000 CR4: 000406d0
> [ 288.222743] Stack:
> [ 288.222743]  00000001 00000246 c707c68c c707c600 40a45321 c7bb5ee0
> c7bb5e14 c16965cb
> [ 288.222743]  0000010f 00000000 00000000 00000000 00000000 00000000
> 00000000 00000000
> [ 288.222743]  00000000 00000000 00000000 00000000 00000000 00000000
> 00000000 00000000
> [ 288.222743] Call Trace:
> [ 288.222743]  [<c16965cb>] snd_seq_ioctl_delete_port+0x3b/0x90
> [ 288.222743]  [<c1696c65>] snd_seq_do_ioctl+0x85/0x90
> [ 288.222743]  [<c1696ca3>] snd_seq_kernel_client_ctl+0x33/0x50
> [ 288.222743]  [<c169b78b>] snd_seq_event_port_detach+0x3b/0x50
> [ 288.222743]  [<c169d6a2>] delete_port+0x12/0x30
> [ 288.222743]  [<c169dbc1>] snd_seq_oss_release+0x41/0x50
> [ 288.222743]  [<c169d406>] odev_release+0x26/0x40
> [ 288.222743]  [<c11a46a3>] __fput+0xc3/0x1d0
> [ 288.222743]  [<c11a47e8>] ____fput+0x8/0x10
> [ 288.222743]  [<c10b924f>] task_work_run+0x6f/0x90
> [ 288.222743]  [<c10017e5>] prepare_exit_to_usermode+0xd5/0x100
> [ 288.222743]  [<c1001841>] syscall_return_slowpath+0x31/0x120
> [ 288.222743]  [<c11bd094>] ? __close_fd+0x54/0x70
> [ 288.222743]  [<c188b372>] syscall_exit_work+0x7/0xc
> [ 288.222743] Code: 5f d0 1e 00 89 f8 e8 68 f0 1e 00 89 45 ec 8b 46
> 7c 8d 4e 7c 39 c1 74 25 8d 58 b8 0f b6 40 b9 8b 55 e8 39 d0 75 0d eb
> 3b 8d 76 00 <0f> b6 40 b9 39 d0 74 30 8b 43 48 39 c1 8d 58 b8 75 ee 8b
> 55 ec
> [ 288.222743] EIP: [<c169bf48>] snd_seq_delete_port+0x48/0xd0 SS:ESP
> 0068:c7bb5d48
> [ 288.222743] CR2: 00000000ffffffb9
> [ 288.222743] ---[ end trace f216bf40eb9b39d6 ]---
>
> I'll try to narrow that down a little bit and email the appropriate maintainer.
>
> --Andy
--
Andy Lutomirski
AMA Capital Management, LLC