From: Thilo-Alexander Ginkel
Date: Sun, 17 Apr 2011
Subject: Re: Soft lockup during suspend since ~2.6.36 [bisected]
On Sun, Apr 17, 2011 at 21:35, Arnd Bergmann <arnd@arndb.de> wrote:
> On Thursday 14 April 2011, Thilo-Alexander Ginkel wrote:
>> All right... I verified all my bisect tests and actually found yet
>> another bug. After correcting that one (and verifying the correctness
>> of the other tests), git bisect actually came up with a commit, which
>> makes some more sense:
>>
>> | e22bee782b3b00bd4534ae9b1c5fb2e8e6573c5c is the first bad commit
>> | commit e22bee782b3b00bd4534ae9b1c5fb2e8e6573c5c
>> | Author: Tejun Heo <tj@kernel.org>
>> | Date:   Tue Jun 29 10:07:14 2010 +0200
>> |
>> |     workqueue: implement concurrency managed dynamic worker pool
>
> Is it possible to make it work by reverting this patch in 2.6.38?

Unfortunately, that's not so easy to test, as the patch does not revert
cleanly against 2.6.38 (23 failed hunks) and I am not sure whether I
want to do the revert by hand ;-).
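
For reference, attempting the revert boils down to something like this
(a rough sketch, not the exact commands; the conflict count will vary
with the tree):

  git checkout v2.6.38
  git revert e22bee782b3b00bd4534ae9b1c5fb2e8e6573c5c   # stops with conflicts
  # or, reverse-applying the original diff to see which hunks fail:
  git show e22bee782b3b | patch -R -p1 --dry-run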

>> The good news is that I am able to reproduce the issue within a KVM
>> virtual machine, so I am able to test for the soft lockup (which
>> somewhat looks like a race condition during worker / CPU shutdown) in
>> a mostly automated fashion. Unfortunately, that also means that this
>> issue is anything but hardware specific, i.e., it most probably affects all
>> SMP systems (with a varying probability depending on the number of
>> CPUs).
>>
>> Adding some further details about my configuration (which I replicated
>> in the VM):
>> - lvm running on top of
>> - dmcrypt (luks) running on top of
>> - md raid1
>>
>> If anyone is interested in getting hold of this VM for further tests,
>> let me know and I'll try to figure out how to get it (2*8 GB, barely
>> compressible due to dmcrypt) to its recipient.
>
> Adding dm-devel to Cc, in case the problem is somewhere in there.
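
For completeness, a stack like the one described above can be recreated
along these lines (a sketch only; device names and sizes are
placeholders, not the exact layout of the VM image):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  cryptsetup luksFormat /dev/md0
  cryptsetup luksOpen /dev/md0 md0_crypt
  pvcreate /dev/mapper/md0_crypt
  vgcreate vg0 /dev/mapper/md0_crypt
  lvcreate -L 4G -n root vg0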

In the meantime I also figured out that 2.6.39-rc3 seems to fix the
issue (there have been some workqueue changes in that release, so this
is plausible) and that md raid1 alone seems to be sufficient to trigger
it.
Now one could try to figure out what actually fixed it, but if that
means another bisect series I am not too keen to perform that
exercise. ;-) If someone else feels inclined to do so, my test
environment is available for download, though:
https://secure.tgbyte.de/dropbox/lockup-test.tar.bz2 (~ 700 MB)

Boot using:
kvm -hda LockupTestRaid-1.qcow2 -hdb LockupTestRaid-2.qcow2 -smp 8 -m 1024 -curses

To run the test, log in as root / test and run:
/root/suspend-test
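
If someone does want to hunt down the fixing commit, a reverse bisect
between 2.6.38 and 2.6.39-rc3 should do it, roughly along these lines
(a sketch only; building and installing each kernel in the VM is left
out):

  # the usual good/bad meaning is inverted, because we are looking for
  # the commit that fixes the lockup rather than the one introducing it
  git bisect start
  git bisect bad v2.6.39-rc3     # suspend works here
  git bisect good v2.6.38        # soft lockup still reproducible here
  # at each step: build and boot the kernel in the VM using the kvm
  # command above, run /root/suspend-test, then mark the revision
  # "good" if it still locks up or "bad" if it no longer does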

Regards,
Thilo
