Subject: Re: 2.6.12-rc1-mm1: resume regression [update] (was: Re: 2.6.12-rc1-mm1: Kernel BUG at pci:389)
From: Len Brown <>
Date: 23 Mar 2005 20:03:00 -0500
On Wed, 2005-03-23 at 18:49, Rafael J. Wysocki wrote:
> Hi,
>
> On Wednesday, 23 of March 2005 23:39, Pavel Machek wrote:
> > Hi!
> >
> > > > > > Will this do it for the moment?
> > > > >
> > > > > Its certainly better.
> > > >
> > > > With the Len's patch applied I have to unload the modules:
> > > >
> > > > ohci_hcd
> > > > ehci_hcd
> > > > yenta_socket
> > > >
> > > > before suspend as each of them hangs the box solid during either
> > > > suspend or resume. Moreover, when I tried to load the ehci_hcd
> > > > module back after resume, it hanged the box solid too.
Is this failure with suspend to RAM or to disk?
How about if you try this patch?
http://linux-acpi.bkbits.net:8080/to-akpm/cset@423b4875tyauh4CrSSoQfXOEPDkmUw
patch -Rp1 from 2.6.12-rc1-mm and see if it stops being broken, or
patch -Np1 to 2.6.12-rc and see if it starts being broken.
This one removes an earlier attempt at resuming PCI links -- putting the onus on the drivers to properly release and re-acquire their interrupt for a successful suspend/resume.
In theory, this is taken care of by a call chain something like this:

driver.resume
  pci_enable_device
    pci_enable_device_bars
      pcibios_enable_device
        pcibios_enable_irq
          acpi_pci_irq_enable
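
For illustration, a minimal, untested sketch of that pattern against the 2.6.12-era PCI driver API. The foo_* names, the SA_SHIRQ sharing flag, and the handler body are made up for the example -- they are not taken from the patch or from any of the drivers named above:

#include <linux/pci.h>
#include <linux/interrupt.h>

/* hypothetical handler, 2.6.12-era signature */
static irqreturn_t foo_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
	return IRQ_HANDLED;
}

static int foo_suspend(struct pci_dev *pdev, pm_message_t state)
{
	/* release the interrupt before sleeping */
	free_irq(pdev->irq, pdev);
	pci_save_state(pdev);
	pci_disable_device(pdev);
	return 0;
}

static int foo_resume(struct pci_dev *pdev)
{
	int rc;

	/*
	 * pci_enable_device() is the step that walks down through
	 * pcibios_enable_irq() to acpi_pci_irq_enable(), so this is
	 * where the interrupt link gets reprogrammed.
	 */
	rc = pci_enable_device(pdev);
	if (rc)
		return rc;
	pci_restore_state(pdev);

	/* re-acquire the (possibly re-routed) interrupt */
	return request_irq(pdev->irq, foo_interrupt, SA_SHIRQ, "foo", pdev);
}

A driver whose .resume never reaches pci_enable_device() never gets its interrupt link reprogrammed, which would be consistent with the hangs described above.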
but if the patch above makes a difference, then theory != practice :-)
I'd believe that ohci_hcd and ehci_hcd are fragile, since a glance at their lengthy .resume routines doesn't make it immediately obvious that they do this. But yenta_dev_resume() does have a pci_enable_device(), so that failure may be less straightforward.
cheers,
-Len
ps. if you point me to a full dmesg -s64000 from a 2.6.12-rc1 ACPI-enabled boot, that would help -- it will show whether we're even using PCI interrupt links (and programming them) for these devices on this box.