Subject: Re: [PATCH v8 17/18] scsi: megaraid_sas: Added support for shared host tagset for cpuhotplug
On Wed, Nov 4, 2020 at 11:38 PM John Garry <john.garry@huawei.com> wrote:
>
> On 04/11/2020 16:07, Kashyap Desai wrote:
> >>>
> >>> v5.10-rc2 is also broken here.
> >>
> >> John, Kashyap, any update on this? If this is going to take a while to fix
> >> it
> >> proper, should I send a patch to revert this or at least disable the
> >> feature by
> >> default for megaraid_sas in the meantime, so it no longer breaks the
> >> existing
> >> systems out there?
> >
> > I am trying to get similar h/w to try out. All my current h/w works fine.
> > Give me couple of days' time.
> > If this is not obviously common issue and need time, we will go with module
> > parameter disable method.
> > I will let you know.
>
> Hi Kashyap,
>
> Please also consider just disabling for this card, so any other possible
> issues are unearthed on other cards. I don't have this card or any x86
> machine to test it unfortunately to assist.
>
> BTW, just to be clear, did you try the same .config as Qian Cai?
>
> Thanks,
> John
I am able to hit the boot hang and see similar stack traces to those
reported by Qian, using the shared .config on an x86 machine.
In my case the system boots after a hang of 40-45 minutes. Qian, is that
true for you as well?
With the module parameter "host_tagset_enable=0", the issue is not seen.
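
For reference, below is a hedged sketch (my reading of the v8 series, not
the verbatim driver code) of how the "host_tagset_enable" module parameter
is wired: when left at its default the host advertises one blk-mq hw queue
per completion vector while keeping a single host-wide tag space, and
booting with host_tagset_enable=0 falls back to the old single-hw-queue
setup. The helper name and the "nr_completion_vectors" argument are
illustrative only.

#include <linux/module.h>
#include <scsi/scsi_host.h>

static bool host_tagset_enable = true;
module_param(host_tagset_enable, bool, 0444);
MODULE_PARM_DESC(host_tagset_enable,
		 "Shared host-wide tag set enable/disable (default: 1)");

/* Illustrative helper; nr_completion_vectors stands in for however many
 * MSI-X vectors the driver ended up with.
 */
static void example_setup_host_tagset(struct Scsi_Host *shost,
				      int nr_completion_vectors)
{
	/* Old behaviour: one hw queue, no host-wide shared tags. */
	shost->nr_hw_queues = 1;
	shost->host_tagset = 0;

	if (host_tagset_enable && nr_completion_vectors > 1) {
		shost->nr_hw_queues = nr_completion_vectors;
		shost->host_tagset = 1;
	}
}

Since it is a module parameter, the fallback can also be selected at boot
time with megaraid_sas.host_tagset_enable=0 on the kernel command line or
via a modprobe.d option, without rebuilding the driver.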
Below is a snippet of the dmesg logs/traces observed during system bootup;
only after the 40-45 minute wait are the drives attached to the
megaraid_sas adapter discovered:

========================================
[ 1969.502913] INFO: task systemd-udevd:906 can't die for more than 1720 seconds.
[ 1969.597725] task:systemd-udevd state:D stack:13456 pid: 906 ppid: 858 flags:0x00000324
[ 1969.597730] Call Trace:
[ 1969.597734] __schedule+0x263/0x7f0
[ 1969.597737] ? __lock_acquire+0x576/0xaf0
[ 1969.597739] ? wait_for_completion+0x7b/0x110
[ 1969.597741] schedule+0x4c/0xc0
[ 1969.597743] schedule_timeout+0x244/0x2e0
[ 1969.597745] ? find_held_lock+0x2d/0x90
[ 1969.597748] ? wait_for_completion+0xa6/0x110
[ 1969.597750] ? wait_for_completion+0x7b/0x110
[ 1969.597752] ? lockdep_hardirqs_on_prepare+0xd4/0x170
[ 1969.597753] ? wait_for_completion+0x7b/0x110
[ 1969.597755] wait_for_completion+0xae/0x110
[ 1969.597757] __flush_work+0x269/0x4b0
[ 1969.597760] ? init_pwq+0xf0/0xf0
[ 1969.597763] work_on_cpu+0x9c/0xd0
[ 1969.597765] ? work_is_static_object+0x10/0x10
[ 1969.597768] ? pci_device_shutdown+0x30/0x30
[ 1969.597770] pci_device_probe+0x197/0x1b0
[ 1969.597773] really_probe+0xda/0x410
[ 1969.597776] driver_probe_device+0xd9/0x140
[ 1969.597778] device_driver_attach+0x4a/0x50
[ 1969.597780] __driver_attach+0x83/0x140
[ 1969.597782] ? device_driver_attach+0x50/0x50
[ 1969.597784] ? device_driver_attach+0x50/0x50
[ 1969.597787] bus_for_each_dev+0x74/0xc0
[ 1969.597789] bus_add_driver+0x14b/0x1f0
[ 1969.597791] ? 0xffffffffc04fb000
[ 1969.597793] driver_register+0x66/0xb0
[ 1969.597795] ? 0xffffffffc04fb000
[ 1969.597801] megasas_init+0xe7/0x1000 [megaraid_sas]
[ 1969.597803] do_one_initcall+0x62/0x300
[ 1969.597806] ? do_init_module+0x1d/0x200
[ 1969.597808] ? kmem_cache_alloc_trace+0x296/0x2d0
[ 1969.597811] do_init_module+0x55/0x200
[ 1969.597813] load_module+0x15f2/0x17b0
[ 1969.597816] ? __do_sys_finit_module+0xad/0x110
[ 1969.597818] __do_sys_finit_module+0xad/0x110
[ 1969.597820] do_syscall_64+0x33/0x40
[ 1969.597823] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 1969.597825] RIP: 0033:0x7f66340262bd
[ 1969.597827] Code: Unable to access opcode bytes at RIP 0x7f6634026293.
[ 1969.597828] RSP: 002b:00007ffca1011f48 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
[ 1969.597831] RAX: ffffffffffffffda RBX: 000055f6720cf370 RCX: 00007f66340262bd
[ 1969.597833] RDX: 0000000000000000 RSI: 00007f6634b9880d RDI: 0000000000000006
[ 1969.597835] RBP: 00007f6634b9880d R08: 0000000000000000 R09: 00007ffca1012070
[ 1969.597836] R10: 0000000000000006 R11: 0000000000000246 R12: 0000000000000000
[ 1969.597838] R13: 000055f6720cce70 R14: 0000000000020000 R15: 0000000000000000
[ 1969.597859]
Showing all locks held in the system:
[ 1969.597862] 2 locks held by kworker/0:0/5:
[ 1969.597863] #0: ffff9af800194b38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x1e6/0x5e0
[ 1969.597872] #1: ffffbf3bc01f3e70 ((kfence_timer).work){+.+.}-{0:0}, at: process_one_work+0x1e6/0x5e0
[ 1969.597890] 3 locks held by kworker/0:1/7:
[ 1969.597960] 1 lock held by khungtaskd/643:
[ 1969.597962] #0: ffffffffa624cb60 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire.constprop.54+0x0/0x30
[ 1969.597982] 1 lock held by systemd-udevd/906:
[ 1969.597983] #0: ffff9af984a1c218 (&dev->mutex){....}-{3:3}, at: device_driver_attach+0x18/0x50

[ 1969.598010] =============================================

[ 1983.242512] random: fast init done
[ 2071.928411] sd 0:2:0:0: [sda] 1951399936 512-byte logical blocks: (999 GB/931 GiB)
[ 2071.928480] sd 0:2:2:0: [sdc] 1756889088 512-byte logical blocks: (900 GB/838 GiB)
[ 2071.928537] sd 0:2:1:0: [sdb] 285474816 512-byte logical blocks: (146 GB/136 GiB)
[ 2071.928580] sd 0:2:0:0: [sda] Write Protect is off
[ 2071.928625] sd 0:2:0:0: [sda] Mode Sense: 1f 00 00 08
[ 2071.928629] sd 0:2:2:0: [sdc] Write Protect is off
[ 2071.928669] sd 0:2:1:0: [sdb] Write Protect is off
[ 2071.928706] sd 0:2:1:0: [sdb] Mode Sense: 1f 00 00 08
[ 2071.928844] sd 0:2:2:0: [sdc] Mode Sense: 1f 00 00 08
[ 2071.928848] sd 0:2:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA


================================

I am working on it and need some time for debugging. BTW, did anyone
try the "shared host tagset" patchset, with the .config shared by Qian,
on any other adapters which are not really multiqueue at the HW level
but whose drivers expose multiple hardware queues (similar to
megaraid_sas)?
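
For anyone unfamiliar with that class of adapter, here is a rough
illustration (my reading of the shared-tags series, not code from any
particular driver) of what "exposes multiple hardware queues over
single-queue hardware" means at the blk-mq level: the driver registers one
hw queue per interrupt vector, but the tag set is marked host-shared, so
all queues draw from a single pool sized to the real hardware queue depth.
The function name and parameters are illustrative; BLK_MQ_F_TAG_HSHARED is
the flag name used by the 5.10 series, and for SCSI hosts the midlayer
sets it when shost->host_tagset is set.

#include <linux/blk-mq.h>
#include <linux/numa.h>
#include <linux/string.h>

static int example_init_tag_set(struct blk_mq_tag_set *set,
				const struct blk_mq_ops *ops,
				int nr_vectors, int hw_queue_depth)
{
	memset(set, 0, sizeof(*set));
	set->ops = ops;
	set->nr_hw_queues = nr_vectors;    /* one hw queue per vector     */
	set->queue_depth = hw_queue_depth; /* ...sharing one HW-deep pool */
	set->numa_node = NUMA_NO_NODE;
	/* Shown directly here for clarity; SCSI LLDDs get this via
	 * shost->host_tagset rather than setting the flag themselves.
	 */
	set->flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_TAG_HSHARED;

	return blk_mq_alloc_tag_set(set);
}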

Thanks,
Sumit